author    Jules Laplace <julescarbon@gmail.com>    2018-12-05 16:19:50 +0100
committer Jules Laplace <julescarbon@gmail.com>    2018-12-05 16:19:50 +0100
commit    2a1b884e841efe562e0c84885a404819433b3405 (patch)
tree      34b2ef4c37099a84f341ae54c60e0a19af271ab4 /site/public
parent    d69086a1b2d7d6e6def55f35e30d0623701de011 (diff)
styling images
Diffstat (limited to 'site/public')
-rw-r--r--  site/public/about/credits/index.html                        5
-rw-r--r--  site/public/about/disclaimer/index.html                     2
-rw-r--r--  site/public/about/index.html                                5
-rw-r--r--  site/public/about/press/index.html                          5
-rw-r--r--  site/public/about/privacy/index.html                        2
-rw-r--r--  site/public/about/style/index.html                         23
-rw-r--r--  site/public/about/terms/index.html                          2
-rw-r--r--  site/public/datasets/lfw/index.html                        56
-rw-r--r--  site/public/datasets/vgg_faces2/index.html                  5
-rw-r--r--  site/public/index.html                                     63
-rw-r--r--  site/public/research/01_from_1_to_100_pixels/index.html   101
11 files changed, 229 insertions, 40 deletions
diff --git a/site/public/about/credits/index.html b/site/public/about/credits/index.html
index 0b3f9db8..9fec7e64 100644
--- a/site/public/about/credits/index.html
+++ b/site/public/about/credits/index.html
@@ -20,15 +20,14 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
<div class="content">
<section><h1>Credits</h1>
-<p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg" alt="alt text"></p>
-<ul>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg' alt='alt text'><div class='caption'>alt text</div></div></section><section><ul>
<li>MegaPixels by Adam Harvey</li>
<li>Made with support from Mozilla</li>
<li>Site developed by Jules Laplace</li>
diff --git a/site/public/about/disclaimer/index.html b/site/public/about/disclaimer/index.html
index 1c14a97c..553bf084 100644
--- a/site/public/about/disclaimer/index.html
+++ b/site/public/about/disclaimer/index.html
@@ -20,7 +20,7 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
diff --git a/site/public/about/index.html b/site/public/about/index.html
index 8441e317..363e8fc0 100644
--- a/site/public/about/index.html
+++ b/site/public/about/index.html
@@ -20,14 +20,13 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
<div class="content">
- <section><p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg" alt="alt text"></p>
-<ul>
+ <section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg' alt='alt text'><div class='caption'>alt text</div></div></section><section><ul>
<li>MegaPixels by Adam Harvey</li>
<li>Made with support from Mozilla</li>
<li>Site developed by Jules Laplace</li>
diff --git a/site/public/about/press/index.html b/site/public/about/press/index.html
index 76ba90e4..aa6e5e13 100644
--- a/site/public/about/press/index.html
+++ b/site/public/about/press/index.html
@@ -20,15 +20,14 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
<div class="content">
<section><h1>Press</h1>
-<p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg" alt="alt text"></p>
-<ul>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg' alt='alt text'><div class='caption'>alt text</div></div></section><section><ul>
<li>Aug 22, 2018: "Transgender YouTubers had their videos grabbed to train facial recognition software" by James Vincent <a href="https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset">https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset</a></li>
<li>Aug 22, 2018: "Transgender YouTubers had their videos grabbed to train facial recognition software" by James Vincent <a href="https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset">https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset</a></li>
<li>Aug 22, 2018: "Transgender YouTubers had their videos grabbed to train facial recognition software" by James Vincent <a href="https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset">https://www.theverge.com/2017/8/22/16180080/transgender-youtubers-ai-facial-recognition-dataset</a></li>
diff --git a/site/public/about/privacy/index.html b/site/public/about/privacy/index.html
index 21fd2255..d1ec1c77 100644
--- a/site/public/about/privacy/index.html
+++ b/site/public/about/privacy/index.html
@@ -20,7 +20,7 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
diff --git a/site/public/about/style/index.html b/site/public/about/style/index.html
index 2e0c80d0..24e6f5be 100644
--- a/site/public/about/style/index.html
+++ b/site/public/about/style/index.html
@@ -20,21 +20,19 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
<div class="content">
- <section><p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg" alt="Alt text here"></p>
-<h1>Header 1</h1>
-<h2>Header 2</h2>
+ <section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/test.jpg' alt='Alt text here'><div class='caption'>Alt text here</div></div></section><section><h2>Header 2</h2>
<h3>Header 3</h3>
<h4>Header 4</h4>
<h5>Header 5</h5>
<h6>Header 6</h6>
<p><strong>Bold text</strong>, <em>italic text</em>, <strong><em>bold italic text</em></strong></p>
-<p>At vero eos et et iusto qui blanditiis <a href="#">praesentium voluptatum</a> deleniti atque corrupti<sup class="footnote-ref" id="fnref-1"><a href="#fn-1">1</a></sup>, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio<sup class="footnote-ref" id="fnref-2"><a href="#fn-2">2</a></sup>. Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus<sup class="footnote-ref" id="fnref-3"><a href="#fn-3">3</a></sup>.</p>
+<p>At vero eos et et iusto qui blanditiis <a href="#">praesentium voluptatum</a> deleniti atque corrupti[^1], quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio[^2]. Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus[^3].</p>
<ul>
<li>Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium</li>
<li>Totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo</li>
@@ -42,9 +40,15 @@
<li>Odit aut fugit, sed quia consequuntur magni dolores eos</li>
<li>Qui ratione voluptatem sequi nesciunt, neque porro quisquam </li>
</ul>
-<blockquote><p>est, qui dolorem ipsum, quia dolor sit amet consectetur adipisci[ng] velit, sed quia non-numquam [do] eius modi tempora inci[di]dunt, ut labore et dolore magnam aliquam quaerat voluptatem.</p>
+<h2>single image test</h2>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='This person is alone'><div class='caption'>This person is alone</div></div></section><section><h2>double image test</h2>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='This person is on the left'><div class='caption'>This person is on the left</div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='This person is on the right'><div class='caption'>This person is on the right</div></div></section><section><h2>triple image test</h2>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='Person 1'><div class='caption'>Person 1</div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='Person 2'><div class='caption'>Person 2</div></div>
+<div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/man.jpg' alt='Person 3. Let me tell you about Person 3. This person has a very long description with text which wraps like crazy'><div class='caption'>Person 3. Let me tell you about Person 3. This person has a very long description with text which wraps like crazy</div></div></section><section><blockquote><p>est, qui dolorem ipsum, quia dolor sit amet consectetur adipisci[ng] velit, sed quia non-numquam [do] eius modi tempora inci[di]dunt, ut labore et dolore magnam aliquam quaerat voluptatem.</p>
</blockquote>
-<p>Inline <code>code</code> has <code>back-ticks around</code> it.</p>
+</section><section class='wide'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/site/about/assets/wide-test.jpg' alt='This image is extremely wide and the text beneath it will wrap but thats fine because it can also contain <a href="https://example.com/">hyperlinks</a>! Yes, you read that right&mdash;hyperlinks! Lorem ipsum dolor sit amet ad volotesque sic hoc ad nauseam'><div class='caption'>This image is extremely wide and the text beneath it will wrap but that's fine because it can also contain <a href="https://example.com/">hyperlinks</a>! Yes, you read that right&mdash;hyperlinks! Lorem ipsum dolor sit amet ad volotesque sic hoc ad nauseam</div></div></section><section><p>Inline <code>code</code> has <code>back-ticks around</code> it.</p>
<pre><code class="lang-javascript">var s = &quot;JavaScript syntax highlighting&quot;;
alert(s);
</code></pre>
@@ -59,10 +63,7 @@ But let's throw in a &lt;b&gt;tag&lt;/b&gt;.
<p>Citations below here</p>
<div class="footnotes">
<hr>
-<ol><li id="fn-1"><p>First source<a href="#fnref-1" class="footnote">&#8617;</a></p></li>
-<li id="fn-2"><p>Second source<a href="#fnref-2" class="footnote">&#8617;</a></p></li>
-<li id="fn-3"><p>Third source<a href="#fnref-3" class="footnote">&#8617;</a></p></li>
-</ol>
+<ol></ol>
</div>
</section>
diff --git a/site/public/about/terms/index.html b/site/public/about/terms/index.html
index 73155546..4b9f4445 100644
--- a/site/public/about/terms/index.html
+++ b/site/public/about/terms/index.html
@@ -20,7 +20,7 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index 8455bc60..a130c24e 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -4,7 +4,7 @@
<title>MegaPixels</title>
<meta charset="utf-8" />
<meta name="author" content="Adam Harvey" />
- <meta name="description" content="One of the most widely used facial recognition datasets." />
+ <meta name="description" content="LFW: Labeled Faces in The Wild" />
<meta name="referrer" content="no-referrer" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<link rel='stylesheet' href='/assets/css/fonts.css' />
@@ -20,7 +20,7 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
@@ -31,17 +31,15 @@
<li>Images 13,233</li>
<li>People 5,749</li>
<li>Created From Yahoo News images</li>
-<li>Search available <a href="#">Searchable</a></li>
+<li>Analyzed and searchable</li>
</ul>
-<p>Labeled Faces in The Wild is amongst the most widely used facial recognition training datasets in the world and is the first dataset of its kind to be created entirely from Internet photos. It includes 13,233 images of 5,749 people downloaded from the Internet, otherwise referred to by researchers as “The Wild”.</p>
-<p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_sample.jpg" alt="Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet."></p>
-<h2>INTRO</h2>
+<p>Labeled Faces in The Wild is amongst the most widely used facial recognition training datasets in the world and is the first dataset of its kind to be created entirely from Internet photos. It includes 13,233 images of 5,749 people downloaded from the Internet, otherwise referred to as “The Wild”.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_sample.jpg' alt='Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet.'><div class='caption'>Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet.</div></div></section><section><h2>INTRO</h2>
<p>It began in 2002. Researchers at University of Massachusetts Amherst were developing algorithms for facial recognition and they needed more data. Between 2002-2004 they scraped Yahoo News for images of public figures. Two years later they cleaned up the dataset and repackaged it as Labeled Faces in the Wild (LFW).</p>
<p>Since then the LFW dataset has become one of the most widely used datasets for evaluating face recognition algorithms. The associated research paper “Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments” has been cited 996 times, reaching 45 different countries throughout the world.</p>
<p>The faces come from news stories and are mostly celebrities from the entertainment industry, politicians, and villains. It’s a sampling of current affairs and breaking news that has come to pass. The images, detached from their original context, now serve a new purpose: to train, evaluate, and improve facial recognition.</p>
<p>As the most widely used facial recognition dataset, it can be said that each individual in LFW has, in a small way, contributed to the current state of the art in facial recognition surveillance. John Cusack, Julianne Moore, Barry Bonds, Osama bin Laden, and even Moby are amongst these biometric pillars, exemplar faces providing the visual dimensions of a new computer vision future.</p>
-<p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_a_to_c.jpg" alt="From Aaron Eckhart to Zydrunas Ilgauskas. A small sampling of the LFW dataset"></p>
-<p>In addition to commercial use as an evaluation tool, alll of the faces in LFW dataset are prepackaged into a popular machine learning code framework called scikit-learn.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_a_to_c.jpg' alt='From Aaron Eckhart to Zydrunas Ilgauskas. A small sampling of the LFW dataset'><div class='caption'>From Aaron Eckhart to Zydrunas Ilgauskas. A small sampling of the LFW dataset</div></div></section><section><p>In addition to commercial use as an evaluation tool, all of the faces in the LFW dataset are prepackaged into a popular machine learning code framework called scikit-learn.</p>
<h2>Usage</h2>
<pre><code class="lang-python">#!/usr/bin/python
from matplotlib import pyplot as plt
@@ -51,11 +49,39 @@ lfw_person = lfw_people[0]
plt.imshow(lfw_person)
</code></pre>
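<p>The snippet above is abridged. As a rough sketch (not an excerpt from the project), the faces bundled with scikit-learn can be loaded along these lines; <code>fetch_lfw_people</code> downloads and caches the images on first use:</p>
<pre><code class="lang-python"># Hedged sketch: load the LFW faces that ship with scikit-learn and show one.
from sklearn.datasets import fetch_lfw_people
from matplotlib import pyplot as plt

# Limit to people with at least 70 images to keep the first download small.
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
print(lfw_people.images.shape)   # (n_samples, height, width)

# Display the first face together with its identity label.
first_face = lfw_people.images[0]
plt.imshow(first_face, cmap=&quot;gray&quot;)
plt.title(lfw_people.target_names[lfw_people.target[0]])
plt.show()
</code></pre>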
<h2>Commercial Use</h2>
-<p>The LFW dataset is used by numerous companies for benchmarking algorithms and in some cases training. According to the benchmarking results page <sup class="footnote-ref" id="fnref-lfw_results"><a href="#fn-lfw_results">1</a></sup> provided by the authors, over 2 dozen companies have contributed their benchmark results</p>
-<p>(Jules: this load the <code>assets/lfw_vendor_results.csv</code>)</p>
-<p>In benchmarking, companies use a dataset to evaluate their algorithms which are typically trained on other data. After training, researchers will use LFW as a benchmark to compare results with other algorithms.</p>
+<p>The LFW dataset is used by numerous companies for benchmarking algorithms and in some cases training. According to the benchmarking results page [^lfw_results] provided by the authors, over two dozen companies have contributed their benchmark results.</p>
+<pre><code>load file: lfw_commercial_use.csv
+name_display,company_url,example_url,country,description
+</code></pre>
+<table>
+<thead><tr>
+<th style="text-align:left">Company</th>
+<th style="text-align:left">Country</th>
+<th style="text-align:left">Industries</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td style="text-align:left"><a href="http://www.aratek.co">Aratek</a></td>
+<td style="text-align:left">China</td>
+<td style="text-align:left">Biometric sensors for telecom, civil identification, finance, education, POS, and transportation</td>
+</tr>
+<tr>
+<td style="text-align:left"><a href="http://www.aratek.co">Aratek</a></td>
+<td style="text-align:left">China</td>
+<td style="text-align:left">Biometric sensors for telecom, civil identification, finance, education, POS, and transportation</td>
+</tr>
+<tr>
+<td style="text-align:left"><a href="http://www.aratek.co">Aratek</a></td>
+<td style="text-align:left">China</td>
+<td style="text-align:left">Biometric sensors for telecom, civil identification, finance, education, POS, and transportation</td>
+</tr>
+</tbody>
+</table>
+<p>Add 2-4 screenshots of companies mentioning LFW here</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_screenshot_01.png' alt='ReadSense'><div class='caption'>ReadSense</div></div></section><section><p>In benchmarking, companies use a dataset to evaluate their algorithms which are typically trained on other data. After training, researchers will use LFW as a benchmark to compare results with other algorithms.</p>
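<p>To make the benchmarking workflow concrete, the sketch below (illustrative only, not code from any vendor) runs the official LFW verification pairs through a trivial pixel-similarity matcher; a real submission swaps in a trained face-embedding model, but the protocol of scoring labeled pairs and reporting accuracy stays the same:</p>
<pre><code class="lang-python"># Illustrative sketch of the LFW verification protocol with a naive matcher.
import numpy as np
from sklearn.datasets import fetch_lfw_pairs

pairs = fetch_lfw_pairs(subset=&quot;test&quot;)   # labeled same/different image pairs
left = pairs.pairs[:, 0].reshape(len(pairs.target), -1)
right = pairs.pairs[:, 1].reshape(len(pairs.target), -1)

# Cosine similarity between the raw pixels of each pair (a real system
# would compare learned face embeddings instead).
left = left / np.linalg.norm(left, axis=1, keepdims=True)
right = right / np.linalg.norm(right, axis=1, keepdims=True)
scores = np.sum(left * right, axis=1)

# Threshold the scores and report verification accuracy (target 1 = same person).
predicted_same = (scores &gt; np.median(scores)).astype(int)
accuracy = (predicted_same == pairs.target).mean()
print(round(accuracy, 3))
</code></pre>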
<p>For example, Baidu (est. net worth $13B) uses LFW to report results for their "Targeting Ultimate Accuracy: Face Recognition via Deep Embedding". According to the three Baidu researchers who produced the paper:</p>
-<blockquote><p>LFW has been the most popular evaluation benchmark for face recognition, and played a very important role in facilitating the face recognition society to improve algorithm. <sup class="footnote-ref" id="fnref-baidu_lfw"><a href="#fn-baidu_lfw">2</a></sup>.</p>
+<blockquote><p>LFW has been the most popular evaluation benchmark for face recognition, and played a very important role in facilitating the face recognition society to improve algorithm. <sup class="footnote-ref" id="fnref-baidu_lfw"><a href="#fn-baidu_lfw">1</a></sup>.</p>
</blockquote>
<h2>Citations</h2>
<table>
@@ -84,10 +110,12 @@ plt.imshow(lfw_person)
<h2>Conclusion</h2>
<p>The LFW face recognition training and evaluation dataset is a historically important face dataset as it was the first popular dataset to be created entirely from Internet images, paving the way for a global trend towards downloading anyone’s face from the Internet and adding it to a dataset. As will be evident with other datasets, LFW’s approach has now become the norm.</p>
<p>For all 5,000 people in this dataset, their faces are forever a part of facial recognition history. It would be impossible to remove anyone from the dataset because it is so ubiquitous. For the rest of their lives and forever after, these 5,000 people will continue to be used for training facial recognition surveillance.</p>
+<h2>Notes</h2>
+<p>According to BiometricUpdate.com<sup class="footnote-ref" id="fnref-biometric_update_lfw"><a href="#fn-biometric_update_lfw">2</a></sup>, LFW is "the most widely used evaluation set in the field of facial recognition, LFW attracts a few dozen teams from around the globe including Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong."</p>
<div class="footnotes">
<hr>
-<ol><li id="fn-lfw_results"><p>"LFW Results". Accessed Dec 3, 2018. <a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a><a href="#fnref-lfw_results" class="footnote">&#8617;</a></p></li>
-<li id="fn-baidu_lfw"><p>"Chinese tourist town uses face recognition as an entry pass". New Scientist. November 17, 2016. <a href="https://www.newscientist.com/article/2113176-chinese-tourist-town-uses-face-recognition-as-an-entry-pass/">https://www.newscientist.com/article/2113176-chinese-tourist-town-uses-face-recognition-as-an-entry-pass/</a><a href="#fnref-baidu_lfw" class="footnote">&#8617;</a></p></li>
+<ol><li id="fn-baidu_lfw"><p>"Chinese tourist town uses face recognition as an entry pass". New Scientist. November 17, 2016. <a href="https://www.newscientist.com/article/2113176-chinese-tourist-town-uses-face-recognition-as-an-entry-pass/">https://www.newscientist.com/article/2113176-chinese-tourist-town-uses-face-recognition-as-an-entry-pass/</a><a href="#fnref-baidu_lfw" class="footnote">&#8617;</a></p></li>
+<li id="fn-biometric_update_lfw"><p>"PING AN Tech facial recognition receives high score in latest LFW test results". <a href="https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results">https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results</a><a href="#fnref-biometric_update_lfw" class="footnote">&#8617;</a></p></li>
</ol>
</div>
</section>
diff --git a/site/public/datasets/vgg_faces2/index.html b/site/public/datasets/vgg_faces2/index.html
index 19efbbbc..ee353047 100644
--- a/site/public/datasets/vgg_faces2/index.html
+++ b/site/public/datasets/vgg_faces2/index.html
@@ -20,7 +20,7 @@
<div class='links'>
<a href="/search">Face Search</a>
<a href="/datasets">Datasets</a>
- <a href="/research/from_1_to_100_pixels/">Research</a>
+ <a href="/">Research</a>
<a href="/about">About</a>
</div>
</header>
@@ -34,8 +34,7 @@
<li>Search available <a href="#">Searchable</a></li>
</ul>
<p>Labeled Faces in The Wild is amongst the most widely used facial recognition training datasets in the world and is the first dataset of its kind to be created entirely from Internet photos. It includes 13,233 images of 5,749 people downloaded from the Internet, otherwise referred to by researchers as “The Wild”.</p>
-<p><img src="https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/vgg_faces2/datasets/lfw/identity_grid_01.jpg" alt="Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet."></p>
-<h2>INTRO</h2>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/vgg_faces2/assets/identity_grid_01.jpg' alt='Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet.'><div class='caption'>Eight out of 5,749 people in the Labeled Faces in the Wild dataset. The face recognition training dataset is created entirely from photos downloaded from the Internet.</div></div></section><section><h2>INTRO</h2>
<p>It began in 2002. Researchers at University of Massachusetts Amherst were developing algorithms for facial recognition and they needed more data. Between 2002-2004 they scraped Yahoo News for images of public figures. Two years later they cleaned up the dataset and repackaged it as Labeled Faces in the Wild (LFW).</p>
<p>Since then the LFW dataset has become one of the most widely used datasets for evaluating face recognition algorithms. The associated research paper “Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments” has been cited 996 times, reaching 45 different countries throughout the world.</p>
<p>The faces come from news stories and are mostly celebrities from the entertainment industry, politicians, and villains. It’s a sampling of current affairs and breaking news that has come to pass. The images, detached from their original context, now serve a new purpose: to train, evaluate, and improve facial recognition.</p>
diff --git a/site/public/index.html b/site/public/index.html
new file mode 100644
index 00000000..ea3dc24c
--- /dev/null
+++ b/site/public/index.html
@@ -0,0 +1,63 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+ <span class='sub'>The Darkside of Datasets</span>
+ </a>
+ <div class='links'>
+ <a href="/search">Face Search</a>
+ <a href="/datasets">Datasets</a>
+ <a href="/">Research</a>
+ <a href="/about">About</a>
+ </div>
+ </header>
+ <div class="content">
+
+ <section><p>MegaPixels is an art project that explores the dark side of face recognition training data and the future of computer vision.</p>
+<p>Made by Adam Harvey in partnership with Mozilla.<br>
+Read more <a href="/about">about MegaPixels</a></p>
+<p>[Explore Datasets] [Explore Algorithms]</p>
+<h2>Facial Recognition Datasets</h2>
+<p>Regular Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
+<h3>Summary</h3>
+<ul>
+<li>275 datasets found</li>
+<li>Created between the years 1993-2018</li>
+<li>Smallest dataset: 20 images</li>
+<li>Largest dataset: 10,000,000 images</li>
+<li>Highest resolution faces: 450x500 (Unconstrained College Students)</li>
+<li>Lowest resolution faces: 16x20 pixels (QMUL SurvFace)</li>
+</ul>
+</section>
+
+ </div>
+ <footer>
+ <div>
+ <a href="/">MegaPixels.cc</a>
+ <a href="/about/disclaimer/">Disclaimer</a>
+ <a href="/about/terms/">Terms of Use</a>
+ <a href="/about/privacy/">Privacy</a>
+ <a href="/about/">About</a>
+ <a href="/about/team/">Team</a>
+ </div>
+ <div>
+ MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
+ <a href="https://ahprojects.com">ahprojects.com</a>
+ </div>
+ </footer>
+</body>
+<script src="/assets/js/app/site.js"></script>
+</html>
\ No newline at end of file
diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html
new file mode 100644
index 00000000..90f142e9
--- /dev/null
+++ b/site/public/research/01_from_1_to_100_pixels/index.html
@@ -0,0 +1,101 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="High resolution insights from low resolution imagery" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+ <span class='sub'>The Darkside of Datasets</span>
+ </a>
+ <div class='links'>
+ <a href="/search">Face Search</a>
+ <a href="/datasets">Datasets</a>
+ <a href="/">Research</a>
+ <a href="/about">About</a>
+ </div>
+ </header>
+ <div class="content">
+
+ <section>
+ <h1>From 1 to 100 Pixels</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-04</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><h2>High resolution insights from low resolution data</h2>
+<p>This post will be about the meaning of "face". How do people define it? How do biometrics researchers define it? How has it changed during the last decade?</p>
+<p>What can you know from a very small amount of information?</p>
+<ul>
+<li>1 pixel grayscale</li>
+<li>2x2 pixels grayscale, font example</li>
+<li>4x4 pixels</li>
+<li>8x8 yotta yotta</li>
+<li>5x7 face recognition</li>
+<li>12x16 activity recognition</li>
+<li>6/5 (up to 124/106) pixels in height/width, and the average is 24/20 for QMUL SurvFace</li>
+<li>20x16 tiny faces paper</li>
+<li>20x20 MNIST handwritten images <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></li>
+<li>24x24 haarcascade detector idealized images</li>
+<li>32x32 CIFAR image dataset</li>
+<li>40x40 can do emotion detection, face recognition at scale, 3d modeling of the face. include datasets with faces at this resolution including pedestrian.</li>
+<li>need more material from 60-100</li>
+<li>60x60 show how texture emerges and pupils, eye color, higher resolution of features and compare to lower resolution faces</li>
+<li>100x100 0.5% of one Instagram photo</li>
+</ul>
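+<p>A quick way to eyeball the resolutions listed above (a hedged sketch, not part of the post; <code>face.jpg</code> is a placeholder image): downsample a face crop to each size, then scale it back up so the information loss is visible side by side.</p>
+<pre><code class="lang-python"># Sketch: shrink a face crop to the resolutions above and blow it back up.
+from PIL import Image
+
+SIZES = [(1, 1), (4, 4), (8, 8), (16, 20), (32, 32), (100, 100)]
+
+face = Image.open(&quot;face.jpg&quot;).convert(&quot;L&quot;)   # grayscale face crop (placeholder path)
+for width, height in SIZES:
+    tiny = face.resize((width, height), Image.BILINEAR)    # throw away detail
+    preview = tiny.resize(face.size, Image.NEAREST)        # enlarge for comparison
+    preview.save(f&quot;face_{width}x{height}.png&quot;)
+</code></pre>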
+<p>Find specific cases of facial resolution being used in legal cases, forensic investigations, or military footage</p>
+<p>Research</p>
+<ul>
+<li>NIST report on sres states several resolutions</li>
+<li>"Results show that the tested face recognition systems yielded similar performance for query sets with eye-to-eye distance from 60 pixels to 30 pixels" <sup class="footnote-ref" id="fnref-nist_sres"><a href="#fn-nist_sres">1</a></sup></li>
+</ul>
+<div class="footnotes">
+<hr>
+<ol><li id="fn-nist_sres"><p>NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips<a href="#fnref-nist_sres" class="footnote">&#8617;</a></p></li>
+</ol>
+</div>
+</section>
+
+ <section>
+ <h3>MORE RESEARCH</h3>
+ <div class='blogposts'>
+
+ </div>
+ </section>
+
+ </div>
+ <footer>
+ <div>
+ <a href="/">MegaPixels.cc</a>
+ <a href="/about/disclaimer/">Disclaimer</a>
+ <a href="/about/terms/">Terms of Use</a>
+ <a href="/about/privacy/">Privacy</a>
+ <a href="/about/">About</a>
+ <a href="/about/team/">Team</a>
+ </div>
+ <div>
+ MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
+ <a href="https://ahprojects.com">ahprojects.com</a>
+ </div>
+ </footer>
+</body>
+<script src="/assets/js/app/site.js"></script>
+</html>
\ No newline at end of file