| author | adamhrv <adam@ahprojects.com> | 2019-06-27 23:58:23 +0200 |
|---|---|---|
| committer | adamhrv <adam@ahprojects.com> | 2019-06-27 23:58:23 +0200 |
| commit | 852e4c1e36c38f57f80fc5d441da82d5991b2212 (patch) | |
| tree | 0c8bc3bbcb6c679e28ba387d0c1e47fb3d16830a /site/public | |
| parent | ae165ef1235a6997d5791ca241fd3fd134202c92 (diff) | |
update public
Diffstat (limited to 'site/public')
18 files changed, 1576 insertions, 83 deletions
diff --git a/site/public/about/index.html b/site/public/about/index.html index 64cc2e30..16a2e967 100644 --- a/site/public/about/index.html +++ b/site/public/about/index.html @@ -89,8 +89,8 @@ </ul> </div><div class='column'><h5>Contributing Researchers</h5> <ul> -<li>Berit Gilma</li> <li>Beth (aka Ms. Celeb)</li> +<li>Berit Gilma</li> <li>Mathana Stender</li> </ul> </div><div class='column'><h5>Code and Libraries</h5> diff --git a/site/public/about/news/index.html b/site/public/about/news/index.html index 5c5c6c61..fcba7877 100644 --- a/site/public/about/news/index.html +++ b/site/public/about/news/index.html @@ -63,9 +63,14 @@ <li><a href="/about/attribution/">Attribution</a></li> <li><a href="/about/legal/">Legal / Privacy</a></li> </ul> -</section><p>Since launching MegaPixels in April 2019, several of the datasets mentioned have disappeared and one surveillance workshop was canceled. Below is a list of responses, reactions, and press:</p> +</section><p>Since launching MegaPixels in April 2019, several of the datasets mentioned have disappeared and one surveillance workshop was canceled (then uncanceled). Below is a timeline of events, responses, reactions, and press:</p> <h5>June 2019</h5> <ul> +<li>June 24: Les Echos (FR) writes about the MS Celeb dataset in <a href="https://www.lesechos.fr/tech-medias/intelligence-artificielle/le-mariage-explosif-de-nos-donnees-et-de-lia-1031813">Le mariage explosif de nos données et de l'IA</a> (The explosive combination of our data and AI)</li> +<li>June 22: La Stampa (Italy) <a href="https://www.lastampa.it/2019/06/22/tecnologia/microsoft-ha-cancellato-il-suo-database-per-il-riconoscimento-facciale-PWwLGmpO1fKQdykMZVBd9H/pagina.html">writes about Microsoft's removal of the MS Celeb dataset</a></li> +<li>June 15: De Tijd (Belgium) <a href="https://www.tijd.be/dossier/legrandinconnu/brainwash/10136670.html">writes about the Brainwash head dataset</a></li> +<li>June 13: <a href="https://www.dukechronicle.com/article/2019/06/duke-university-video-analysis-research-at-duke-carlo-tomasi">Creator of Duke MTMC dataset apologizes to students recorded for surveillance research</a>, telling the student body and university: "I take full responsibility for my mistakes, and I apologize to all people who were recorded and to Duke for their consequences"</li> +<li>June 12: Duke Chronicle, Duke University's student paper, investigates the Duke MTMC dataset and confirms it violated its IRB protocol: <a href="https://www.dukechronicle.com/article/2019/06/duke-university-facial-recognition-data-set-study-surveillance-video-students-china-uyghur">"A Duke study recorded thousands of students’ faces. 
Now they’re being used all over the world"</a></li> <li>June 7: Additional coverage of FT's story by <a href="https://www.bbc.com/news/technology-48555149">BBC</a>, <a href="https://www.spiegel.de/netzwelt/web/microsoft-gesichtserkennung-datenbank-mit-zehn-millionen-fotos-geloescht-a-1271221.html">Spiegel.de</a>, <a href="https://www.irishtimes.com/business/technology/microsoft-quietly-deletes-largest-public-face-recognition-data-set-1.3916825">IrishTimes</a>, and <a href="https://gizmodo.com/microsoft-quietly-pulls-its-database-of-100-000-faces-u-1835296212">Gizmodo</a></li> <li>June 6: Financial Times covers the abrupt disappearance of four facial recognition datasets: <a href="https://www.ft.com/content/7d3e0d6a-87a0-11e9-a028-86cea8523dc2">Microsoft quietly deletes largest public face recognition data set</a> by Madhumita Murgia</li> <li>June 2: A person tracking surveillance workshop at CVPR (<a href="https://reid-mct.github.io/2019/">reid-mct.github.io/2019</a>) has been canceled due to the <a href="/datasets/duke_mtmc">Duke MTMC dataset</a> no longer being available: "Due to some unforeseen circumstances, the test data has not been available. The multi-target multi-camera tracking and person re-identification challenge is canceled. We sincerely apologize for any inconvenience caused."</li> diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html index 9bae51a1..9a70a3f6 100644 --- a/site/public/datasets/duke_mtmc/index.html +++ b/site/public/datasets/duke_mtmc/index.html @@ -75,7 +75,7 @@ </div><div class='meta'> <div class='gray'>Website</div> <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div> - </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a>.</p> + </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 1">1</a></p> <p>For this analysis of the Duke MTMC dataset, over 100 publicly available research papers that used the dataset were analyzed to find out who's using the dataset and where it's being used. The results show that the Duke MTMC dataset has spread far beyond its origins and intentions in academic research projects at Duke University. 
Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p> <p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in suspect tracking. Both SenseNets and SenseTime have been linked to providing surveillance technology to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 4">4</a><a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a></p> </section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a humanitarian crisis, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 90 research projects happening in China that publicly acknowledged using the Duke MTMC dataset. 
Amongst these were projects from CloudWalk, Hikvision, Megvii (Face++), SenseNets, SenseTime, Beihang University, China's National University of Defense Technology, and the PLA's Army Engineering University.</p> diff --git a/site/public/datasets/helen/index.html b/site/public/datasets/helen/index.html new file mode 100644 index 00000000..a7ada42a --- /dev/null +++ b/site/public/datasets/helen/index.html @@ -0,0 +1,169 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: HELEN</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="HELEN Face Dataset" /> + <meta property="og:title" content="MegaPixels: HELEN"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/datasets/helen/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='page_name'>Helen Dataset</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-dataset"> + 
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/helen/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>HELEN Face Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>HELEN (under development) +</span></div></div></section><section><h2>HELEN</h2> +</section><section><div class='right-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2012</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>2,330 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>facial feature localization algorithm</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://www.ifp.illinois.edu/~vuongle2/helen/' target='_blank' rel='nofollow noopener'>illinois.edu</a></div> + </div></div><p>[ page under development ]</p> +</section><section> + <h3>Who used Helen Dataset?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> + <div class="applet" data-payload="{"command": "chart"}"></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "piechart"}"></div> +</section> + +<section> + + <h3>Information Supply chain</h3> + + <p> + To help understand how Helen Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Helen Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>. 
+ </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section> + + <h4>Cite Our Work</h4> + <p> + + If you find this analysis helpful, please cite our work: + +<pre id="cite-bibtex"> +@online{megapixels, + author = {Harvey, Adam. LaPlace, Jules.}, + title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets}, + year = 2019, + url = {https://megapixels.cc/}, + urldate = {2019-04-18} +}</pre> + + </p> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/ibm_dif/index.html b/site/public/datasets/ibm_dif/index.html new file mode 100644 index 00000000..1c465f93 --- /dev/null +++ b/site/public/datasets/ibm_dif/index.html @@ -0,0 +1,172 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: MegaFace</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="MegaFace Dataset" /> + <meta property="og:title" content="MegaPixels: MegaFace"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/datasets/ibm_dif/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='page_name'>MegaFace Dataset</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ibm_dif/assets/background.jpg)'><div 
class='inner'><div class='hero_desc'><span class='bgpad'>MegaFace Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>MegaFace contains 670K identities and 4.7M images +</span></div></div></section><section><h2>MegaFace</h2> +</section><section><div class='right-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2016</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>4,753,520 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>672,057 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>face recognition</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://megaface.cs.washington.edu/' target='_blank' rel='nofollow noopener'>washington.edu</a></div> + </div></div><p>[ page under development ]</p> +</section><section> + <h3>Who used MegaFace Dataset?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> + <div class="applet" data-payload="{"command": "chart"}"></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "piechart"}"></div> +</section> + +<section> + + <h3>Information Supply chain</h3> + + <p> + To help understand how MegaFace Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing MegaFace Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>. 
+ </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section> + + <h4>Cite Our Work</h4> + <p> + + If you find this analysis helpful, please cite our work: + +<pre id="cite-bibtex"> +@online{megapixels, + author = {Harvey, Adam. LaPlace, Jules.}, + title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets}, + year = 2019, + url = {https://megapixels.cc/}, + urldate = {2019-04-18} +}</pre> + + </p> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/ijb_c/index.html b/site/public/datasets/ijb_c/index.html index ccb7d90d..a36fac14 100644 --- a/site/public/datasets/ijb_c/index.html +++ b/site/public/datasets/ijb_c/index.html @@ -76,26 +76,16 @@ <div class='gray'>Website</div> <div><a href='https://www.nist.gov/programs-projects/face-challenges' target='_blank' rel='nofollow noopener'>nist.gov</a></div> </div></div><p>[ page under development ]</p> -<p>The IARPA Janus Benchmark C (IJB–C) is a dataset of web images used for face recognition research and development. The IJB–C dataset contains 3,531 people</p> -<p>Among the target list of 3,531 names are activists, artists, journalists, foreign politicians,</p> +<p>The IARPA Janus Benchmark C (IJB–C) is a dataset of web images used for face recognition research and development. The IJB–C dataset contains 3,531 people from 21,294 images and 3,531 videos. The list of 3,531 names includes activists, artists, journalists, foreign politicians, and public speakers.</p> +<p>Key Findings:</p> <ul> -<li>Subjects 3531</li> -<li>Templates: 140739</li> -<li>Genuine Matches: 7819362</li> -<li>Impostor Matches: 39584639</li> -</ul> -<p>Why not include US Soliders instead of activists?</p> -<p>was creted by Nobilis, a United States Government contractor is used to develop software for the US intelligence agencies as part of the IARPA Janus program.</p> -<p>The IARPA Janus program is</p> -<p>these representations must address the challenges of Aging, Pose, Illumination, and Expression (A-PIE) by exploiting all available imagery.</p> -<ul> -<li>metadata annotations were created using crowd annotations</li> -<li>created by Nobilis</li> -<li>used mechanical turk</li> +<li>metadata annotations were created using crowd annotations on Mechanical Turk</li> +<li>The dataset was created by Noblis</li> <li>made for intelligence analysts</li> <li>improve performance of face recognition tools</li> <li>by fusing the rich spatial, temporal, and contextual information available from the multiple views captured by today’s "media in the wild"</li> </ul> +<p>The dataset includes Creative Commons images.</p> <p>The name list includes</p> <ul> <li>2 videos from CCC<ul> @@ -134,7 +124,7 @@ <p>The first 777 are non-alphabetical. From 777-3531 is alphabetical</p> </section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/ijb_c/assets/ijb_c_montage.jpg' alt=' A visualization of the IJB-C dataset'><div class='caption'> A visualization of the IJB-C dataset</div></div></section><section><h2>Research notes</h2> <p>From original papers: <a href="https://noblis.org/wp-content/uploads/2018/03/icb2018.pdf">https://noblis.org/wp-content/uploads/2018/03/icb2018.pdf</a></p> -<p>Collection for the dataset began by identifying CreativeCommons subject videos, which are often more scarce thanCreative Commons subject images. Search terms that re-sulted in large quantities of person-centric videos (e.g. “in-terview”) were generated and translated into numerous lan-guages including Arabic, Korean, Swahili, and Hindi to in-crease diversity of the subject pool. Certain YouTube userswho upload well-labeled, person-centric videos, such as the World Economic Forum and the International University Sports Federation were also identified. Titles of videos per-taining to these search terms and usernames were scrapedusing the YouTube Data API and translated into English us-ing the Yandex Translate API4. 
Pattern matching was per-formed to extract potential names of subjects from the trans-lated titles, and these names were searched using the Wiki-data API to verify the subject’s existence and status as a public figure, and to check for Wikimedia Commons im-agery. Age, gender, and geographic region were collectedusing the Wikipedia API.Using the candidate subject names, Creative Commonsimages were scraped from Google and Wikimedia Com-mons, and Creative Commons videos were scraped fromYouTube. After images and videos of the candidate subjectwere identified, AMT Workers were tasked with validat-ing the subject’s presence throughout the video. The AMTWorkers marked segments of the video in which the subjectwas present, and key frames</p> +<p>Collection for the dataset began by identifying Creative Commons subject videos, which are often more scarce than Creative Commons subject images. Search terms that resulted in large quantities of person-centric videos (e.g. “interview”) were generated and translated into numerous languages including Arabic, Korean, Swahili, and Hindi to increase diversity of the subject pool. Certain YouTube users who upload well-labeled, person-centric videos, such as the World Economic Forum and the International University Sports Federation were also identified. Titles of videos pertaining to these search terms and usernames were scraped using the YouTube Data API and translated into English using the Yandex Translate API. Pattern matching was performed to extract potential names of subjects from the translated titles, and these names were searched using the Wikidata API to verify the subject’s existence and status as a public figure, and to check for Wikimedia Commons imagery. Age, gender, and geographic region were collected using the Wikipedia API. Using the candidate subject names, Creative Commons images were scraped from Google and Wikimedia Commons, and Creative Commons videos were scraped from YouTube. After images and videos of the candidate subject were identified, AMT Workers were tasked with validating the subject’s presence throughout the video. 
The AMTWorkers marked segments of the video in which the subjectwas present, and key frames</p> <p>IARPA funds Italian researcher <a href="https://www.micc.unifi.it/projects/glaivejanus/">https://www.micc.unifi.it/projects/glaivejanus/</a></p> </section><section> <h3>Who used IJB-C?</h3> diff --git a/site/public/datasets/megaface/index.html b/site/public/datasets/megaface/index.html new file mode 100644 index 00000000..33abf6c1 --- /dev/null +++ b/site/public/datasets/megaface/index.html @@ -0,0 +1,172 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: MegaFace</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="MegaFace Dataset" /> + <meta property="og:title" content="MegaPixels: MegaFace"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/datasets/megaface/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='page_name'>MegaFace Dataset</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a 
href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/megaface/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MegaFace Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>MegaFace contains 670K identities and 4.7M images +</span></div></div></section><section><h2>MegaFace</h2> +</section><section><div class='right-sidebar'><div class='meta'> + <div class='gray'>Published</div> + <div>2016</div> + </div><div class='meta'> + <div class='gray'>Images</div> + <div>4,753,520 </div> + </div><div class='meta'> + <div class='gray'>Identities</div> + <div>672,057 </div> + </div><div class='meta'> + <div class='gray'>Purpose</div> + <div>face recognition</div> + </div><div class='meta'> + <div class='gray'>Website</div> + <div><a href='http://megaface.cs.washington.edu/' target='_blank' rel='nofollow noopener'>washington.edu</a></div> + </div></div><p>[ page under development ]</p> +</section><section> + <h3>Who used MegaFace Dataset?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> + <div class="applet" data-payload="{"command": "chart"}"></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "piechart"}"></div> +</section> + +<section> + + <h3>Information Supply chain</h3> + + <p> + To help understand how MegaFace Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing MegaFace Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>. 
+ </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section> + + <h4>Cite Our Work</h4> + <p> + + If you find this analysis helpful, please cite our work: + +<pre id="cite-bibtex"> +@online{megapixels, + author = {Harvey, Adam. LaPlace, Jules.}, + title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets}, + year = 2019, + url = {https://megapixels.cc/}, + urldate = {2019-04-18} +}</pre> + + </p> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html index 2e326416..7109cc9b 100644 --- a/site/public/datasets/msceleb/index.html +++ b/site/public/datasets/msceleb/index.html @@ -206,7 +206,7 @@ <p>Earlier in 2019, Microsoft President and Chief Legal Officer <a href="https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/">Brad Smith</a> called for the governmental regulation of face recognition, citing the potential for misuse, a rare admission that Microsoft's surveillance-driven business model had lost its bearing. More recently Smith also <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">announced</a> that Microsoft would seemingly take a stand against such potential misuse, and had decided to not sell face recognition to an unnamed United States agency, citing a lack of accuracy. In effect, Microsoft's face recognition software was not suitable to be used on minorities because it was trained mostly on white male faces.</p> <p>What the decision to block the sale announces is not so much that Microsoft had upgraded their ethics policy, but that Microsoft publicly acknowledged it can't sell a data-driven product without data. In other words, Microsoft can't sell face recognition if they don't have enough face training data to build it.</p> <p>Until now, that data has been freely harvested from the Internet and packaged in training sets like MS Celeb, which are overwhelmingly <a href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html">white</a> and <a href="https://gendershades.org">male</a>. Without balanced data, facial recognition contains blind spots. But without the large-scale datasets like MS Celeb, the powerful yet inaccurate facial recognition services like Microsoft Azure Cognitive would be even less usable.</p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/msceleb_montage.jpg' alt=' A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>Microsoft didn't only create MS Celeb for other researchers to use, they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>," Microsoft used the MS Celeb face dataset to build their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention they used the MS Celeb datset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org does. 
It states that Microsoft Research analyzed their algorithms using "the MS-Celeb-1M low-shot learning benchmark task."<a class="footnote_shim" name="[^one_shot]_1"> </a><a href="#[^one_shot]" class="footnote" title="Footnote 5">5</a></p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/msceleb_montage.jpg' alt=' A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 2,000 of the 100,000 identities included in the MS-Celeb-1M dataset distributed by Microsoft Research. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>Microsoft didn't only create MS Celeb for other researchers to use; they also used it internally. In a publicly available 2017 Microsoft Research project called "<a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">One-shot Face Recognition by Promoting Underrepresented Classes</a>," Microsoft used the MS Celeb face dataset to build their algorithms and advertise the results. Interestingly, Microsoft's <a href="https://www.microsoft.com/en-us/research/publication/one-shot-face-recognition-promoting-underrepresented-classes/">corporate version</a> of the paper does not mention that they used the MS Celeb dataset, but the <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">open-access version</a> published on arxiv.org does. It states that Microsoft analyzed their algorithms "on the MS-Celeb-1M low-shot learning <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">benchmark task</a>"<a class="footnote_shim" name="[^one_shot]_1"> </a><a href="#[^one_shot]" class="footnote" title="Footnote 5">5</a>, which is described as a refined version of the original MS-Celeb-1M face dataset.</p> <p>Typically researchers will phrase this differently and say that they only use a dataset to validate their algorithm. But validation data can't be easily separated from the training process. To develop a neural network model, image training datasets are split into three parts: train, test, and validation. Training data is used to fit a model, and the validation and test data are used to provide feedback about the hyperparameters, biases, and outputs. 
In reality, test and validation data steers and influences the final results of neural networks.</p> <h2>Runaway Data</h2> <p>Despite the recent termination of the <a href="https://msceleb.org">msceleb.org</a> website, the dataset still exists in several repositories on GitHub, the hard drives of countless researchers, and will likely continue to be used in research projects around the world.</p> diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html index 6f6bd70b..40f8bbc6 100644 --- a/site/public/datasets/oxford_town_centre/index.html +++ b/site/public/datasets/oxford_town_centre/index.html @@ -141,9 +141,9 @@ <h2>Supplementary Information</h2> </section><section><h3>Location</h3> -<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs <a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>. At that location, two public CCTV cameras exist mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a> it is increasingly likely that they would also be able to access to this camera.</p> -<p>Next, to discredit the theory that this public CCTV is only seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as the Oxford Town Centre dataset, proving the camera can and has been rotated before.</p> -<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd – 7th indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007, since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. 
The lack of street vendors and slight waste residue near the bench suggests that it was probably a weekday after rubbish removal.</p> +<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs <a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>. At that location, two public CCTV cameras exist mounted on the side of the Northgate House building at 13-20 Cornmarket St. The upper camera, a public CCTV camera installed for security, is most likely the camera used to create this dataset.</p> +<p>The camera can be seen pointing in the same direction as the dataset's view in this <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">public image</a>, and the researchers used other existing public CCTV cameras for additional <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a>, increasing the likelihood that they could have had access to this camera.</p> +<p>The capture date is estimated to be during late November or early December in 2007 or 2008. The text on the storefront display shows a sale happening from December 2nd – 7th, indicating the capture date was likely around this time. Prior to 2007, the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. And since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, it was probably recorded in 2007. The slight waste residue near the bench and the lack of street vendors that typically appear on a weekend suggest that it was perhaps a weekday after rubbish removal.</p> </section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_cctv.jpg' alt=' Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)'><div class='caption'> Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. 
Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)</div></div></section><section><div class='columns columns-'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_body.jpg' alt=' Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc'><div class='caption'> Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_face.jpg' alt=' Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc'><div class='caption'> Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset © megapixels.cc</div></div></section></div></section><section> <h4>Cite Our Work</h4> diff --git a/site/public/datasets/who_goes_there/index.html b/site/public/datasets/who_goes_there/index.html new file mode 100644 index 00000000..3db77ff7 --- /dev/null +++ b/site/public/datasets/who_goes_there/index.html @@ -0,0 +1,157 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: Who Goes There Dataset</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Who Goes There Dataset" /> + <meta property="og:title" content="MegaPixels: Who Goes There Dataset"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/datasets/who_goes_there/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" 
type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + <div class='page_name'>Who Goes There Dataset</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/who_goes_there/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Who Goes There Dataset</span></div><div class='hero_subdesc'><span class='bgpad'>Who Goes There (page under development) +</span></div></div></section><section><h2>Who Goes There</h2> +</section><section><div class='right-sidebar'></div><p>[ page under development ]</p> +</section><section> + <h3>Who used Who Goes There Dataset?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. + </p> + + </section> + +<section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> + <div class="applet" data-payload="{"command": "chart"}"></div> +</section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "piechart"}"></div> +</section> + +<section> + + <h3>Information Supply chain</h3> + + <p> + To help understand how Who Goes There Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing WhoGoesThere was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. + </p> + + </section> + +<section class="applet_container fullwidth"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + </ul> + <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > +</div> + + +<section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>. + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section> + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> + +</section><section> + + <h4>Cite Our Work</h4> + <p> + + If you find this analysis helpful, please cite our work: + +<pre id="cite-bibtex"> +@online{megapixels, + author = {Harvey, Adam. LaPlace, Jules.}, + title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets}, + year = 2019, + url = {https://megapixels.cc/}, + urldate = {2019-04-18} +}</pre> + + </p> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
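<p>The citation geocoding described above can be approximated with a short script. The sketch below is illustrative only, not the MegaPixels pipeline: it assumes institution names have already been extracted from each paper's front matter, and it uses OpenStreetMap's Nominatim search API as a stand-in geocoder.</p>
<pre>
# Illustrative sketch of geocoding citation affiliations; not the MegaPixels pipeline.
# Assumes affiliation strings were already pulled from the papers' PDF front matter.
import time
import requests

def geocode_institution(name):
    """Return (lat, lon) for an institution name via Nominatim, or None if not found."""
    resp = requests.get(
        "https://nominatim.openstreetmap.org/search",
        params={"q": name, "format": "json", "limit": 1},
        headers={"User-Agent": "citation-geocoding-sketch"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()
    return (float(results[0]["lat"]), float(results[0]["lon"])) if results else None

# Hypothetical affiliations found in papers that cite the dataset
for name in ["Duke University", "Tsinghua University"]:
    print(name, geocode_institution(name))
    time.sleep(1)  # stay within Nominatim's ~1 request/second usage policy
</pre>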
\ No newline at end of file diff --git a/site/public/research/00_introduction/index.html b/site/public/research/00_introduction/index.html index e7f14be5..bfd048e9 100644 --- a/site/public/research/00_introduction/index.html +++ b/site/public/research/00_introduction/index.html @@ -1,11 +1,11 @@ <!doctype html> <html> <head> - <title>MegaPixels: 00: Introduction</title> + <title>MegaPixels: Introducing MegaPixels</title> <meta charset="utf-8" /> - <meta name="author" content="Megapixels" /> + <meta name="author" content="Adam Harvey" /> <meta name="description" content="Introduction to Megapixels" /> - <meta property="og:title" content="MegaPixels: 00: Introduction"/> + <meta property="og:title" content="MegaPixels: Introducing MegaPixels"/> <meta property="og:type" content="website"/> <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" /> @@ -53,10 +53,10 @@ <a href="/about/news">News</a> </div> </header> - <div class="content content-"> + <div class="content content-dataset"> <section> - <h1>00: Introduction</h1> + <h1>Introducing MegaPixels</h1> <div class='meta'> <div> <div class='gray'>Posted</div> @@ -64,65 +64,27 @@ </div> <div> <div class='gray'>By</div> - <div>Megapixels</div> + <div>Adam Harvey</div> </div> </div> </section> - <section><div class='meta'><div><div class='gray'>Posted</div><div>Dec. 15</div></div><div><div class='gray'>Author</div><div>Adam Harvey</div></div></div><section><section><p>Facial recognition is a scam.</p> -<p>It's extractive and damaging industry that's built on the biometric backbone of the Internet.</p> -<p>During the last 20 years commericial, academic, and governmental agencies have promoted the false dream of a future with face recognition. This essay debunks the popular myth that such a thing ever existed.</p> -<p>There is no such thing as <em>face recognition</em>. For the last 20 years, government agencies, commercial organizations, and academic institutions have played the public as a fool, selling a roadmap of the future that simply does not exist. Facial recognition, as it is currently defined, promoted, and sold to the public, government, and commercial sector is a scam.</p> -<p>Committed to developing robust solutions with superhuman accuracy, the industry has repeatedly undermined itself by never actually developing anything close to "face recognition".</p> -<p>There is only biased feature vector clustering and probabilistic thresholding.</p> -<h2>If you don't have data, you don't have a product.</h2> -<p>Yesterday's <a href="https://www.reuters.com/article/us-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUSKCN1RS2FV">decision</a> by Brad Smith, CEO of Microsoft, to not sell facial recognition to a US law enforcement agency is not an about face by Microsoft to become more humane, it's simply a perfect illustration of the value of training data. Without data, you don't have a product to sell. 
Microsoft realized that it doesn't have enough training data to sell.</p>
-<h2>Cost of Faces</h2>
-<p>Univ Houston paid subjects $20/ea
-<a href="http://web.archive.org/web/20170925053724/http://cbl.uh.edu/index.php/pages/research/collecting_facial_images_from_multiples_in_texas">http://web.archive.org/web/20170925053724/http://cbl.uh.edu/index.php/pages/research/collecting_facial_images_from_multiples_in_texas</a></p>
-<p>FaceMeta facedataset.com</p>
+ <section><p>Face recognition has become the focal point for ...</p>
+<p>Add 68pt landmarks animation</p>
+<p>But biometric currency is ...</p>
+<p>Add rotation 3D head</p>
+<p>Inflationary...</p>
+<p>Add Theresa May 3D</p>
+<p>(commission for CPDP)</p>
+<p>Add info from the AI Traps talk</p>
<ul>
-<li>BASIC: 15,000 images for $6,000 USD</li>
-<li>RECOMMENDED: 50,000 images for $12,000 USD</li>
-<li>ADVANCED: 100,000 images for $18,000 USD*</li>
+<li>Posted: Dec. 15</li>
+<li>Author: Adam Harvey</li>
</ul>
-<h2>Use Your Own Biometrics First</h2>
-<p>If researchers want faces, they should take selfies and create their own dataset. If researchers want images of families to build surveillance software, they should use and distribute their own family portraits.</p>
-<h3>Motivation</h3>
-<p>Ever since government agencies began developing face recognition in the early 1960s, datasets of face images have always been central to developing and validating face recognition technologies. Today, these datasets no longer originate in labs, but instead come from family photo albums posted on photo-sharing sites, surveillance camera footage from college campuses, search engine queries for celebrities, cafe livestreams, or <a href="https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset">videos on YouTube</a>.</p>
-<p>During the last year, hundreds of these facial analysis datasets created "in the wild" have been collected to understand how they contribute to a global supply chain of biometric data that is powering the global facial recognition industry.</p>
-<p>While many of these datasets include public figures such as politicians, athletes, and actors, they also include many non-public figures: digital activists, students, pedestrians, and semi-private shared photo albums are all considered "in the wild" and fair game for research projects. Some images are used under Creative Commons licenses, yet others were taken in unconstrained scenarios without awareness or consent. At first glance it appears many of the datasets were created for seemingly harmless academic research, but when examined further it becomes clear that they're also used by foreign defense agencies.</p>
-<p>The MegaPixels site is based on an earlier <a href="https://ahprojects.com/megapixels-glassroom">installation</a> (also supported by Mozilla) at the <a href="https://theglassroom.org/">Tactical Tech Glassroom</a> in London in 2017, a commission from the Elevate arts festival curated by Berit Gilma about pedestrian recognition datasets in 2018, and research during <a href="https://cvdazzle.com">CV Dazzle</a> from 2010-2015. Through the many prototypes, conversations, pitches, PDFs, and false starts this project has endured during the last 5 years, it eventually evolved into something much different than originally imagined.
Now, as datasets become increasingly influential in shaping the computational future, it's clear that they must be critically analyzed to understand the biases, shortcomings, funding sources, and contributions to the surveillance industry. However, it's misguided to only criticize these datasets for their flaws without also praising their contribution to society. Without publicly available facial analysis datasets there would be less public discourse, less open-source software, and less peer-reviewed research. Public datasets can indeed become a vital public good for the information economy but, as this project aims to illustrate, many ethical questions arise about consent, intellectual property, surveillance, and privacy.</p>
-<!-- who provided funding to research, development this project understand the role these datasets have played in creating biometric surveillance technologies. -->
-
-
-
-
-<p>Ever since the first computational facial recognition research project by the CIA in the early 1960s, data has always played a vital role in the development of our biometric future. Without facial recognition datasets there would be no facial recognition. Datasets are an indispensable part of any artificial intelligence system because, as Geoffrey Hinton points out:</p>
-<blockquote><p>Our relationship to computers has changed. Instead of programming them, we now show them and they figure it out.
- <a href="https://www.youtube.com/watch?v=-eyhCTvrEtE">Geoffrey Hinton</a></p>
-</blockquote>
-<p>Algorithms learn from datasets. And we program algorithms by building datasets. But datasets aren't like code. There's no programming language made of data except for the data itself.</p>
-<p>Ignore content below these lines</p>
-<p>It was the early 2000s. Face recognition was new and no one seemed sure exactly how well it was going to perform in practice. In theory, face recognition was poised to be a game changer, a force multiplier, a strategic military advantage, a way to make cities safer and to secure borders. This was the future John Ashcroft demanded with the Total Information Awareness act of 2003 and that spooks had dreamed of for decades. It was a future that academics at Carnegie Mellon University and Colorado State University would help build. It was also a future that celebrities would play a significant role in building. And to the surprise of ordinary Internet users like myself and perhaps you, it was a future that millions of Internet users would unwittingly play a role in creating.</p>
-<p>Now the future has arrived and it doesn't make sense. Facial recognition works yet it doesn't actually work. Facial recognition is cheap and accessible but also expensive and out of control. Facial recognition research has achieved headline-grabbing superhuman accuracies over 99.9%, yet facial recognition is also dangerously inaccurate. During a trial installation at Südkreuz station in Berlin in 2018, 20% of the matches were wrong, an error rate so high that it should not have any connection to law enforcement or justice. And in London, the Metropolitan Police had been using facial recognition software with an alarming 98% false match rate <sup class="footnote-ref" id="fnref-met_police"><a href="#fn-met_police">1</a></sup>, which perhaps is a crime itself.</p>
-<p>MegaPixels is an online art project that explores the history of facial recognition from the perspective of datasets. To paraphrase the artist Trevor Paglen, whoever controls the dataset controls the meaning.
MegaPixels aims to unravel the meanings behind the data and expose the darker corners of the biometric industry that have contributed to its growth. MegaPixels does not start with a conclusion, a moralistic slant, or a</p>
-<p>Whether or not to build facial recognition is a question that can no longer be asked. As an outspoken critic of face recognition I've developed, and hopefully furthered, my understanding during the last 10 years I've spent working with computer vision. Though I initially disagreed, I've come to see the technocratic perspective as a non-negotiable reality. As Oren (nytimes article) wrote in a NYT op-ed, "the horse is out of the barn," and the only thing we can do collectively or individually is to steer towards the least worst outcome. Computational communication has entered a new era and it's both exciting and frightening to explore the potentials and opportunities. In 1997 getting access to 1 teraFLOPS of computational power would have cost you $55 million and required a strategic partnership with the Department of Defense. At the time of writing, anyone can rent 1 teraFLOPS on a cloud GPU marketplace for less than $1/day<sup class="footnote-ref" id="fnref-asci_option_red"><a href="#fn-asci_option_red">2</a></sup>.</p>
-<p>I hope that this project will illuminate the darker areas of the strange world of facial recognition that have not yet received attention and encourage discourse in academic, industry, and . By no means do I believe discourse can save the day. Nor do I think creating artwork can. In fact, I'm not exactly sure what the outcome of this project will be. The project is not so much what I publish here but what happens after. This entire project is only a prologue.</p>
-<p>As McLuhan wrote, "You can't have a static, fixed position in the electric age". And in our hyper-connected age of mass surveillance, artificial intelligence, and unevenly distributed virtual futures, the most irrational thing to be is rational. Increasingly the world is becoming a contradiction where people use surveillance to protest surveillance, use</p>
-<p>Like many projects, MegaPixels spent years meandering between formats and unfeasible budgets, and was generally too niche a subject. The basic idea for this project, as proposed to the original <a href="https://tacticaltech.org/projects/the-glass-room-nyc/">Glass Room</a> installation in 2016 in NYC, was to build an interactive mirror that showed people if they had been included in the <a href="/datasets/lfw">LFW</a> facial recognition dataset. The idea was based on my reaction to all the datasets I'd come across during research for the CV Dazzle project. I'd noticed strange datasets created for training and testing face detection algorithms. Most were created in laboratory settings and their interpretation of face data was very strict.</p>
-<h3>for other post</h3>
-<p>It was the early 2000s. Face recognition was new and no one seemed sure how well it was going to perform in practice. In theory, face recognition was poised to be a game changer, a force multiplier, a strategic military advantage, a way to make cities safer and to secure the borders. It was the future that John Ashcroft demanded with the Total Information Awareness act of 2003. It was a future that academics helped build. It was a future that celebrities helped build.
And it was a future that</p>
-<p>A decade earlier the Department of Defense's Counterdrug Technology Development Program Office initiated a feasibility study called FERET (FacE REcognition Technology) to "develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties [^feret_website]."</p>
-<p>One problem with the FERET dataset was that the photos were taken in controlled settings. For face recognition to work it would have to be used in uncontrolled settings. Even newer datasets such as the Multi-PIE (Pose, Illumination, and Expression) from Carnegie Mellon University included only indoor photos of cooperative subjects. Not only were the photos completely unrealistic, but CMU's Multi-PIE also included only 18 individuals, cost $500 for academic use [^cmu_multipie_cost], took years to create, and required consent from every participant.</p>
-<h2>Add progressive GAN of FERET</h2>
-<div class="footnotes">
-<hr>
-<ol><li id="fn-met_police"><p>Sharman, Jon. "Metropolitan Police's facial recognition technology 98% inaccurate, figures show". 2018. <a href="https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html">https://www.independent.co.uk/news/uk/home-news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-a8345036.html</a><a href="#fnref-met_police" class="footnote">↩</a></p></li>
-<li id="fn-asci_option_red"><p>Calle, Dan. "Supercomputers". 1997. <a href="http://ei.cs.vt.edu/~history/SUPERCOM.Calle.HTML">http://ei.cs.vt.edu/~history/SUPERCOM.Calle.HTML</a><a href="#fnref-asci_option_red" class="footnote">↩</a></p></li>
-</ol>
-</div>
-</section>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/00_introduction/assets/summary_countries_top.csv", "fields": ["country, Xcitations"]}'></div></section><section><p>Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting.
Paragraph text to test css formatting.</p> +<p>[ page under development ]</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/00_introduction/assets/test.png' alt=' This is the caption'><div class='caption'> This is the caption</div></div></section> </div> <footer> diff --git a/site/public/research/01_munich_security_conference/index.html b/site/public/research/01_munich_security_conference/index.html new file mode 100644 index 00000000..0598b1eb --- /dev/null +++ b/site/public/research/01_munich_security_conference/index.html @@ -0,0 +1,94 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Transnational Data Analysis of Publicly Available Face Recognition Training Datasets" /> + <meta property="og:title" content="MegaPixels: Transnational Data Analysis of Publicly Available Face Recognition Training Datasets"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/01_munich_security_conference/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' 
href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-"> + + <section> + <h1>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h1> + <div class='meta'> + <div> + <div class='gray'>Posted</div> + <div>2018-12-15</div> + </div> + <div> + <div class='gray'>By</div> + <div>Adam Harvey</div> + </div> + + </div> + </section> + + <section><p>Add subtitle</p> +<h2>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h2> +</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/msc/assets/embassy_counts_public.csv", "fields": ["Name, Images, Year, Gender, Description, URL"]}'></div></section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html index 67dcbb5e..907b73d6 100644 --- a/site/public/research/02_what_computers_can_see/index.html +++ b/site/public/research/02_what_computers_can_see/index.html @@ -70,7 +70,21 @@ </div> </section> - <section><p>A list of 100 things computer vision can see, eg:</p> + <section><p>Rosalind Picard on Affective Computing Podcast with Lex Fridman</p> +<ul> +<li>we can read with an ordinary camera on your phone, from a neutral face if</li> +<li>your heart is racing</li> +<li>if your breating is becoming irregular and showing signs of stress</li> +<li>how your heart rate variability power is changing even when your heart is not necessarily accelerating</li> +<li>we can tell things about your stress even if you have a blank face</li> +</ul> +<p>in emotion studies</p> +<ul> +<li>when participants use smartphone and multiple data types are collected to understand patterns of life can predict tomorrow's mood</li> +<li>get best results </li> +<li>better than 80% accurate at predicting tomorrow's mood levels</li> +</ul> +<p>A list of 100 things computer vision can see, eg:</p> <ul> <li>age, race, gender, ancestral origin, body mass index</li> <li>eye color, hair color, facial hair, glasses</li> @@ -84,7 +98,7 @@ <h2>From SenseTime paper</h2> <p>Exploring Disentangled Feature Representation Beyond Face Identification</p> <p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a> -The attribute IDs from 1 to 40 corre-spond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attrac-tive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘BigNose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’,‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’,‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheek-bones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Nar-row Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘PointyNose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’,‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Ear-rings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Neck-lace’, ‘Wearing Necktie’ and ‘Young’. It’</p> +The attribute IDs from 1 to 40 corre-spond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘BigNose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’,‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’,‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheek-bones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Nar-row Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘PointyNose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’,‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Ear-rings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Neck-lace’, ‘Wearing Necktie’ and ‘Young’. 
It’</p> <h2>From PubFig Dataset</h2> <ul> <li>Male</li> diff --git a/site/public/research/_from_1_to_100_pixels/index.html b/site/public/research/_from_1_to_100_pixels/index.html new file mode 100644 index 00000000..74f334cc --- /dev/null +++ b/site/public/research/_from_1_to_100_pixels/index.html @@ -0,0 +1,172 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: From 1 to 100 Pixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="High resolution insights from low resolution imagery" /> + <meta property="og:title" content="MegaPixels: From 1 to 100 Pixels"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/_from_1_to_100_pixels/assets/intro.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/_from_1_to_100_pixels/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-"> + + <section> + <h1>From 1 to 100 Pixels</h1> + <div class='meta'> + 
<div> + <div class='gray'>Posted</div> + <div>2018-12-04</div> + </div> + <div> + <div class='gray'>By</div> + <div>Adam Harvey</div> + </div> + + </div> + </section> + + <section><h3>High resolution insights from low resolution data</h3> +<p>This post will be about the meaning of "face". How do people define it? How to biometrics researchers define it? How has it changed during the last decade.</p> +<p>What can you know from a very small amount of information?</p> +<ul> +<li>1 pixel grayscale</li> +<li>2x2 pixels grayscale, font example, can encode letters</li> +<li>3x3 pixels: can create a font</li> +<li>4x4 pixels: how many variations</li> +<li>8x8 yotta yotta, many more variations</li> +<li>5x7 face recognition </li> +<li>12x16 activity recognition</li> +<li>6/5 (up to 124/106) pixels in height/width, and the average is 24/20 for QMUL SurvFace</li> +<li>(prepare a Progan render of the QMUL dataset and TinyFaces)</li> +<li>20x16 tiny faces paper</li> +<li>20x20 MNIST handwritten images <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></li> +<li>24x24 haarcascade detector idealized images</li> +<li>32x32 CIFAR image dataset</li> +<li>40x40 can do emotion detection, face recognition at scale, 3d modeling of the face. include datasets with faces at this resolution including pedestrian.</li> +<li>NIST standards begin to appear from 40x40, distinguish occular pixels</li> +<li>need more material from 60-100</li> +<li>60x60 show how texture emerges and pupils, eye color, higher resolution of features and compare to lower resolution faces</li> +<li>100x100 all you need for medical diagnosis</li> +<li>100x100 0.5% of one Instagram photo</li> +</ul> +<p>Notes:</p> +<ul> +<li>Google FaceNet used images with (face?) sizes: Input sizes range from 96x96 pixels to 224x224pixels in our experiments. FaceNet: A Unified Embedding for Face Recognition and Clustering <a href="https://arxiv.org/pdf/1503.03832.pdf">https://arxiv.org/pdf/1503.03832.pdf</a></li> +</ul> +<p>Ideas:</p> +<ul> +<li>Find specific cases of facial resolution being used in legal cases, forensic investigations, or military footage</li> +<li>resolution of boston bomber face</li> +<li>resolution of the state of the union image</li> +</ul> +<h3>Research</h3> +<ul> +<li>NIST report on sres states several resolutions</li> +<li>"Results show that the tested face recognition systems yielded similar performance for query sets with eye-to-eye distance from 60 pixels to 30 pixels" <sup class="footnote-ref" id="fnref-nist_sres"><a href="#fn-nist_sres">1</a></sup></li> +</ul> +<ul> +<li>"Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf </li> +<li>IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. 
Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."</li>
+</ul>
+<p>As the resolution
+formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values</p>
+<p>To consider how visual privacy applies to real-world surveillance situations, the first</p>
+<p>A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet <code>a-zA-Z0-9</code> with room to spare.</p>
+<p>A 2x2 pixel image contains</p>
+<p>Using a face image of no more than 42 pixels (a 6x7 image), researchers [cite] were able to correctly distinguish between a group of 50 people. Yet</p>
+<p>The likely outcome of face recognition research is that more data is needed to improve. Indeed, resolution is the determining factor for all biometric systems, both as training data to increase</p>
+<p>Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.</p>
+<p>Wi-Fi and cameras present elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?</p>
+<p>As the global Internet becomes increasingly efficient at turning itself into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that by decreasing face size to 12x16 pixels it was possible to achieve 98% accuracy with 50 people (a minimal downscaling sketch follows after the footnotes below). This is clearly an example of</p>
+<p>This research module, tentatively called Visual Defense Tools, aims to explore the</p>
+<h3>Prior Research</h3>
+<ul>
+<li>MPI visual privacy advisor</li>
+<li>NIST: super resolution</li>
+<li>YouTube blur tool</li>
+<li>WITNESS: blur tool</li>
+<li>Pixellated text </li>
+<li>CV Dazzle</li>
+<li>Bellingcat guide to geolocation</li>
+<li>Peng! magic passport</li>
+</ul>
+<h3>Notes</h3>
+<ul>
+<li>In China, out of the approximately 200 million surveillance cameras only about 15% have enough resolution for face recognition. </li>
+<li>In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000. </li>
+<li>In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals. </li>
+<li>In a face recognition trial at Berlin's Südkreuz station, the false-match rate was 20%. </li>
+</ul>
+<p>What these examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact, face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match. This presents a real threat to privacy and lends</p>
+<p>Globally, iPhone users unwittingly agree to a 1/1,000,000 probability
+relying on FaceID and TouchID to protect their information agree to a</p>
+<div class="footnotes">
+<hr>
+<ol><li id="fn-nist_sres"><p>NIST 906932.
Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips<a href="#fnref-nist_sres" class="footnote">↩</a></p></li> +</ol> +</div> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
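<p>The tiny resolutions surveyed in this draft can be reproduced with a few lines of Pillow. The sketch below is illustrative only (the input file name is a placeholder and the resampling filters are arbitrary choices): it reduces a face crop to the 6x7-to-100x100 range discussed above, and shows why a single 8-bit pixel already has enough states to index the characters a-z, A-Z, 0-9.</p>
<pre>
# Illustrative only: downscale a face crop to the tiny resolutions discussed above.
# "face.jpg" is a placeholder path, not project data.
import string
from PIL import Image

face = Image.open("face.jpg").convert("L")  # 8-bit grayscale

for width, height in [(6, 7), (20, 16), (24, 24), (40, 40), (100, 100)]:
    tiny = face.resize((width, height), Image.BILINEAR)   # downsample
    preview = tiny.resize(face.size, Image.NEAREST)       # enlarge again to inspect the blocks
    preview.save(f"face_{width}x{height}.png")

# One 8-bit grayscale pixel has 256 states, more than the 62 characters in a-zA-Z0-9,
# so a single pixel value can index the whole alphanumeric alphabet with room to spare.
alphabet = string.ascii_letters + string.digits
assert len(alphabet) <= 256
print(alphabet[200 % len(alphabet)])  # decode one pixel value back to a character
</pre>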
\ No newline at end of file diff --git a/site/public/research/_introduction/index.html b/site/public/research/_introduction/index.html new file mode 100644 index 00000000..66905247 --- /dev/null +++ b/site/public/research/_introduction/index.html @@ -0,0 +1,106 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: Introducing MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Introduction to Megapixels" /> + <meta property="og:title" content="MegaPixels: Introducing MegaPixels"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/_introduction/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-dataset"> + + <section> + <h1>Introducing MegaPixels</h1> + <div class='meta'> + <div> + <div class='gray'>Posted</div> + <div>2018-12-15</div> + </div> + <div> + 
<div class='gray'>By</div> + <div>Adam Harvey</div> + </div> + + </div> + </section> + + <section><p>Face recognition has become the focal point for ...</p> +<p>Add 68pt landmarks animation</p> +<p>But biometric currency is ...</p> +<p>Add rotation 3D head</p> +<p>Inflationary...</p> +<p>Add Theresea May 3D</p> +<p>(comission for CPDP)</p> +<p>Add info from the AI Traps talk</p> +<ul> +<li>Posted: Dec. 15</li> +<li>Author: Adam Harvey</li> +</ul> +</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/00_introduction/assets/summary_countries_top.csv", "fields": ["country, Xcitations"]}'></div></section><section><p>Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting. Paragraph text to test css formatting.</p> +<p>[ page under development ]</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/_introduction/assets/test.png' alt=' This is the caption'><div class='caption'> This is the caption</div></div></section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/research/_what_computers_can_see/index.html b/site/public/research/_what_computers_can_see/index.html new file mode 100644 index 00000000..003dd733 --- /dev/null +++ b/site/public/research/_what_computers_can_see/index.html @@ -0,0 +1,357 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: What Computers Can See</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="What Computers Can See" /> + <meta property="og:title" content="MegaPixels: What Computers Can See"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/_what_computers_can_see/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + <link rel="manifest" href="/assets/img/favicon/manifest.json"> + <meta name="msapplication-TileColor" content="#ffffff"> + <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> + <meta name="theme-color" content="#ffffff"> + + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> + <link rel='stylesheet' href='/assets/css/mobile.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + <a href="/about/news">News</a> + </div> + </header> + <div class="content content-"> + + <section> + <h1>What Computers Can See</h1> + <div class='meta'> + <div> + <div class='gray'>Posted</div> + 
<div>2018-12-15</div> + </div> + <div> + <div class='gray'>By</div> + <div>Adam Harvey</div> + </div> + + </div> + </section> + + <section><p>Rosalind Picard on Affective Computing Podcast with Lex Fridman</p> +<ul> +<li>we can read with an ordinary camera on your phone, from a neutral face if</li> +<li>your heart is racing</li> +<li>if your breating is becoming irregular and showing signs of stress</li> +<li>how your heart rate variability power is changing even when your heart is not necessarily accelerating</li> +<li>we can tell things about your stress even if you have a blank face</li> +</ul> +<p>in emotion studies</p> +<ul> +<li>when participants use smartphone and multiple data types are collected to understand patterns of life can predict tomorrow's mood</li> +<li>get best results </li> +<li>better than 80% accurate at predicting tomorrow's mood levels</li> +</ul> +<p>A list of 100 things computer vision can see, eg:</p> +<ul> +<li>age, race, gender, ancestral origin, body mass index</li> +<li>eye color, hair color, facial hair, glasses</li> +<li>beauty score, </li> +<li>intelligence</li> +<li>what you're looking at</li> +<li>medical conditions</li> +<li>tired, drowsiness in car</li> +<li>affectiva: interest in product, intent to buy</li> +</ul> +<h2>From SenseTime paper</h2> +<p>Exploring Disentangled Feature Representation Beyond Face Identification</p> +<p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a> +The attribute IDs from 1 to 40 corre-spond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘BigNose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’,‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’,‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheek-bones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Nar-row Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘PointyNose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’,‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Ear-rings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Neck-lace’, ‘Wearing Necktie’ and ‘Young’. 
It’</p> +<h2>From PubFig Dataset</h2> +<ul> +<li>Male</li> +<li>Asian</li> +<li>White</li> +<li>Black</li> +<li>Baby</li> +<li>Child</li> +<li>Youth</li> +<li>Middle Aged</li> +<li>Senior</li> +<li>Black Hair</li> +<li>Blond Hair</li> +<li>Brown Hair</li> +<li>Bald</li> +<li>No Eyewear</li> +<li>Eyeglasses</li> +<li>Sunglasses</li> +<li>Mustache</li> +<li>Smiling Frowning</li> +<li>Chubby</li> +<li>Blurry</li> +<li>Harsh Lighting</li> +<li>Flash</li> +<li>Soft Lighting</li> +<li>Outdoor Curly Hair</li> +<li>Wavy Hair</li> +<li>Straight Hair</li> +<li>Receding Hairline</li> +<li>Bangs</li> +<li>Sideburns</li> +<li>Fully Visible Forehead </li> +<li>Partially Visible Forehead </li> +<li>Obstructed Forehead</li> +<li>Bushy Eyebrows </li> +<li>Arched Eyebrows</li> +<li>Narrow Eyes</li> +<li>Eyes Open</li> +<li>Big Nose</li> +<li>Pointy Nose</li> +<li>Big Lips</li> +<li>Mouth Closed</li> +<li>Mouth Slightly Open</li> +<li>Mouth Wide Open</li> +<li>Teeth Not Visible</li> +<li>No Beard</li> +<li>Goatee </li> +<li>Round Jaw</li> +<li>Double Chin</li> +<li>Wearing Hat</li> +<li>Oval Face</li> +<li>Square Face</li> +<li>Round Face </li> +<li>Color Photo</li> +<li>Posed Photo</li> +<li>Attractive Man</li> +<li>Attractive Woman</li> +<li>Indian</li> +<li>Gray Hair</li> +<li>Bags Under Eyes</li> +<li>Heavy Makeup</li> +<li>Rosy Cheeks</li> +<li>Shiny Skin</li> +<li>Pale Skin</li> +<li>5 o' Clock Shadow</li> +<li>Strong Nose-Mouth Lines</li> +<li>Wearing Lipstick</li> +<li>Flushed Face</li> +<li>High Cheekbones</li> +<li>Brown Eyes</li> +<li>Wearing Earrings</li> +<li>Wearing Necktie</li> +<li>Wearing Necklace</li> +</ul> +<p>for i in {1..9};do wget <a href="http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for">http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for</a> i in {10..20}; do wget <a href="http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done">http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done</a></p> +<h2>From Market 1501</h2> +<p>The 27 attributes are:</p> +<table> +<thead><tr> +<th style="text-align:center">attribute</th> +<th style="text-align:center">representation in file</th> +<th style="text-align:center">label</th> +</tr> +</thead> +<tbody> +<tr> +<td style="text-align:center">gender</td> +<td style="text-align:center">gender</td> +<td style="text-align:center">male(1), female(2)</td> +</tr> +<tr> +<td style="text-align:center">hair length</td> +<td style="text-align:center">hair</td> +<td style="text-align:center">short hair(1), long hair(2)</td> +</tr> +<tr> +<td style="text-align:center">sleeve length</td> +<td style="text-align:center">up</td> +<td style="text-align:center">long sleeve(1), short sleeve(2)</td> +</tr> +<tr> +<td style="text-align:center">length of lower-body clothing</td> +<td style="text-align:center">down</td> +<td style="text-align:center">long lower body clothing(1), short(2)</td> +</tr> +<tr> +<td style="text-align:center">type of lower-body clothing</td> +<td style="text-align:center">clothes</td> +<td style="text-align:center">dress(1), pants(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing hat</td> +<td style="text-align:center">hat</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying backpack</td> +<td style="text-align:center">backpack</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying bag</td> +<td 
style="text-align:center">bag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying handbag</td> +<td style="text-align:center">handbag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">age</td> +<td style="text-align:center">age</td> +<td style="text-align:center">young(1), teenager(2), adult(3), old(4)</td> +</tr> +<tr> +<td style="text-align:center">8 color of upper-body clothing</td> +<td style="text-align:center">upblack, upwhite, upred, uppurple, upyellow, upgray, upblue, upgreen</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">9 color of lower-body clothing</td> +<td style="text-align:center">downblack, downwhite, downpink, downpurple, downyellow, downgray, downblue, downgreen,downbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +</tbody> +</table> +<p>source: <a href="https://github.com/vana77/Market-1501_Attribute/blob/master/README.md">https://github.com/vana77/Market-1501_Attribute/blob/master/README.md</a></p> +<h2>From DukeMTMC</h2> +<p>The 23 attributes are:</p> +<table> +<thead><tr> +<th style="text-align:center">attribute</th> +<th style="text-align:center">representation in file</th> +<th style="text-align:center">label</th> +</tr> +</thead> +<tbody> +<tr> +<td style="text-align:center">gender</td> +<td style="text-align:center">gender</td> +<td style="text-align:center">male(1), female(2)</td> +</tr> +<tr> +<td style="text-align:center">length of upper-body clothing</td> +<td style="text-align:center">top</td> +<td style="text-align:center">short upper body clothing(1), long(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing boots</td> +<td style="text-align:center">boots</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">wearing hat</td> +<td style="text-align:center">hat</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying backpack</td> +<td style="text-align:center">backpack</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying bag</td> +<td style="text-align:center">bag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">carrying handbag</td> +<td style="text-align:center">handbag</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">color of shoes</td> +<td style="text-align:center">shoes</td> +<td style="text-align:center">dark(1), light(2)</td> +</tr> +<tr> +<td style="text-align:center">8 color of upper-body clothing</td> +<td style="text-align:center">upblack, upwhite, upred, uppurple, upgray, upblue, upgreen, upbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +<tr> +<td style="text-align:center">7 color of lower-body clothing</td> +<td style="text-align:center">downblack, downwhite, downred, downgray, downblue, downgreen, downbrown</td> +<td style="text-align:center">no(1), yes(2)</td> +</tr> +</tbody> +</table> +<p>source: <a href="https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md">https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md</a></p> +<h2>From H3D Dataset</h2> +<p>The joints and other keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles) +The 3D pose inferred from the keypoints. 
+Visibility boolean for each keypoint +Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder) +Body type (male, female or child)</p> +<p>source: <a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/">https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/</a></p> +<h2>From Leeds Sports Pose</h2> +<p>=INDEX(A2:A9,MATCH(datasets!D1,B2:B9,0)) +=VLOOKUP(A2, datasets!A:J, 7, FALSE)</p> +<p>Right ankle +Right knee +Right hip +Left hip +Left knee +Left ankle +Right wrist +Right elbow +Right shoulder +Left shoulder +Left elbow +Left wrist +Neck +Head top</p> +<p>source: <a href="http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html">http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html</a></p> +</section> + + </div> + <footer> + <ul class="footer-left"> + <li><a href="/">MegaPixels.cc</a></li> + <li><a href="/datasets/">Datasets</a></li> + <li><a href="/about/">About</a></li> + <li><a href="/about/news/">News</a></li> + <li><a href="/about/legal/">Legal & Privacy</a></li> + </ul> + <ul class="footer-right"> + <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li> + <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li> + </ul> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
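<p>The Market-1501 and DukeMTMC attribute tables above encode most attributes with the codes 1/2 (no/yes, or male/female), plus a four-level age scale. A minimal decoding sketch, assuming the annotations have been exported to a hypothetical attributes.csv with one row per identity:</p>
<pre>
# Illustrative only: decode the 1/2-coded person attributes listed above.
# "attributes.csv" is a hypothetical export, one row per identity.
import csv

BINARY = {"1": "no", "2": "yes"}        # most attributes: no(1) / yes(2)
GENDER = {"1": "male", "2": "female"}   # gender: male(1) / female(2)
AGE = {"1": "young", "2": "teenager", "3": "adult", "4": "old"}

def decode(row):
    out = {}
    for key, value in row.items():
        if key == "gender":
            out[key] = GENDER.get(value, value)
        elif key == "age":
            out[key] = AGE.get(value, value)
        elif key == "identity":
            out[key] = value
        else:
            out[key] = BINARY.get(value, value)
    return out

with open("attributes.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(decode(row))
</pre>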
\ No newline at end of file diff --git a/site/public/research/index.html b/site/public/research/index.html index 007431bd..571b8230 100644 --- a/site/public/research/index.html +++ b/site/public/research/index.html @@ -56,7 +56,7 @@ <div class="content content-"> <section><h1>Research Blog</h1> -</section> +</section><div class='research_index'><a href='/research/_introduction/'><section class='wide'><img src='data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==' alt='Research post' /><section><h1>Introducing MegaPixels</h1><h2></h2></section></section></a><a href='/research/munich_security_conference/'><section class='wide'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg' alt='Research post' /><section><h1>Transnational Data Analysis of Publicly Available Face Recognition Training Datasets</h1><h2></h2></section></section></a></div> </div> <footer> diff --git a/site/public/research/munich_security_conference/index.html b/site/public/research/munich_security_conference/index.html new file mode 100644 index 00000000..499d8e9f --- /dev/null +++ b/site/public/research/munich_security_conference/index.html @@ -0,0 +1,123 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels: MSC</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Analyzing the Transnational Flow of Facial Recognition Data" /> + <meta property="og:title" content="MegaPixels: MSC"/> + <meta property="og:type" content="website"/> + <meta property="og:summary" content="MegaPixels is an art and research project about face recognition datasets created \"in the wild\"/> + <meta property="og:image" content="https://nyc3.digitaloceanspaces.com/megapixels/v1/research/munich_security_conference/assets/background.jpg" /> + <meta property="og:url" content="https://megapixels.cc/research/munich_security_conference/"/> + <meta property="og:site_name" content="MegaPixels" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/> + <meta name="apple-mobile-web-app-status-bar-style" content="black"> + <meta name="apple-mobile-web-app-capable" content="yes"> + + <link rel="apple-touch-icon" sizes="57x57" href="/assets/img/favicon/apple-icon-57x57.png"> + <link rel="apple-touch-icon" sizes="60x60" href="/assets/img/favicon/apple-icon-60x60.png"> + <link rel="apple-touch-icon" sizes="72x72" href="/assets/img/favicon/apple-icon-72x72.png"> + <link rel="apple-touch-icon" sizes="76x76" href="/assets/img/favicon/apple-icon-76x76.png"> + <link rel="apple-touch-icon" sizes="114x114" href="/assets/img/favicon/apple-icon-114x114.png"> + <link rel="apple-touch-icon" sizes="120x120" href="/assets/img/favicon/apple-icon-120x120.png"> + <link rel="apple-touch-icon" sizes="144x144" href="/assets/img/favicon/apple-icon-144x144.png"> + <link rel="apple-touch-icon" sizes="152x152" href="/assets/img/favicon/apple-icon-152x152.png"> + <link rel="apple-touch-icon" sizes="180x180" href="/assets/img/favicon/apple-icon-180x180.png"> + <link rel="icon" type="image/png" sizes="192x192" href="/assets/img/favicon/android-icon-192x192.png"> + <link rel="icon" type="image/png" sizes="32x32" href="/assets/img/favicon/favicon-32x32.png"> + <link rel="icon" type="image/png" sizes="96x96" href="/assets/img/favicon/favicon-96x96.png"> + <link rel="icon" type="image/png" sizes="16x16" href="/assets/img/favicon/favicon-16x16.png"> + 
+ <link rel="manifest" href="/assets/img/favicon/manifest.json">
+ <meta name="msapplication-TileColor" content="#ffffff">
+ <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
+ <meta name="theme-color" content="#ffffff">
+
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+ <link rel='stylesheet' href='/assets/css/mobile.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ <a href="/about/news">News</a>
+ </div>
+ </header>
+ <div class="content content-dataset">
+
+ <section>
+ <h1>MSC</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2019-04-18</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Analyzing the Transnational Flow of Facial Recognition Data</span></div><div class='hero_subdesc'><span class='bgpad'>Where does face data originate and who's using it?
+</span></div></div></section><section><p>[page under development]</p>
+<p>Intro paragraph.</p>
+<p>[ add montage of extracted faces here]</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/montage_placeholder.jpg' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/bar_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section><div class='columns columns-2'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/pie_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/research/munich_security_conference/assets/pie_placeholder.png' alt=' Placeholder caption'><div class='caption'> Placeholder caption</div></div></section></div></section><section>
+
+ <div class="hr-wave-holder">
+ <div class="hr-wave-line hr-wave-line1"></div>
+ <div class="hr-wave-line hr-wave-line2"></div>
+ </div>
+
+ <h2>Supplementary Information</h2>
+
+</section><section><p>[ add a download button for CSV data ]</p>
+</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file /site/research/munich_security_conference/assets/embassy_counts_public.csv", "fields": ["Images, Dataset, Embassy, Flickr ID, URL, Guest, Host"]}'></div></section><section>
+
+ <h4>Cite Our Work</h4>
+ <p>
+
+ If you find this analysis helpful, please cite our work:
+
+<pre id="cite-bibtex">
+@online{megapixels,
+ author = {Harvey, Adam and
LaPlace, Jules},
+ title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
+ year = 2019,
+ url = {https://megapixels.cc/},
+ urldate = {2019-04-18}
+}</pre>
+
+ </p>
+</section>
+
+ </div>
+ <footer>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/news/">News</a></li>
+ <li><a href="/about/legal/">Legal & Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels ©2017-19 <a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from <a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
+ </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html>
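Editor note on the applet in the new MSC page above: its data-payload JSON loads a supplementary CSV (embassy_counts_public.csv) whose listed columns are Images, Dataset, Embassy, Flickr ID, URL, Guest, and Host. Below is a minimal Python sketch of how that CSV could be summarized once it is published, assuming the column names given in the payload and an integer image count per row; the file path, column semantics, and the applet's load_file behavior are assumptions, not taken from the applet code.
<pre>
import csv
from collections import defaultdict

def image_counts_by_dataset(path="embassy_counts_public.csv"):
    """Sum the 'Images' column per dataset named in the supplementary CSV."""
    totals = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # 'Dataset' and 'Images' are the column names given in the
            # applet payload above; treat an empty count as zero.
            totals[row["Dataset"]] += int(row["Images"] or 0)
    return dict(totals)

if __name__ == "__main__":
    # Example usage: print datasets with the largest image counts first.
    for dataset, count in sorted(image_counts_by_dataset().items(),
                                 key=lambda kv: kv[1], reverse=True):
        print(f"{dataset}: {count}")
</pre>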
\ No newline at end of file