author     adamhrv <adam@ahprojects.com>    2019-04-15 14:08:35 +0200
committer  adamhrv <adam@ahprojects.com>    2019-04-15 14:08:35 +0200
commit     828ab34ca5e01e03e055ef9e091a88cd516a6061 (patch)
tree       6671cc305526d6acbb4e4166ef06ead6e7126d7b /site/public/research/01_from_1_to_100_pixels
parent     cc60ee511cc86d00ed0f13476513f2e183382763 (diff)
fix up duke
Diffstat (limited to 'site/public/research/01_from_1_to_100_pixels')
-rw-r--r--  site/public/research/01_from_1_to_100_pixels/index.html  139
1 file changed, 139 insertions, 0 deletions
diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html
new file mode 100644
index 00000000..9426ef0f
--- /dev/null
+++ b/site/public/research/01_from_1_to_100_pixels/index.html
@@ -0,0 +1,139 @@
+<!doctype html>
+<html>
+<head>
+ <title>MegaPixels</title>
+ <meta charset="utf-8" />
+ <meta name="author" content="Adam Harvey" />
+ <meta name="description" content="High resolution insights from low resolution imagery" />
+ <meta name="referrer" content="no-referrer" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <link rel='stylesheet' href='/assets/css/fonts.css' />
+ <link rel='stylesheet' href='/assets/css/css.css' />
+ <link rel='stylesheet' href='/assets/css/leaflet.css' />
+ <link rel='stylesheet' href='/assets/css/applets.css' />
+</head>
+<body>
+ <header>
+ <a class='slogan' href="/">
+ <div class='logo'></div>
+ <div class='site_name'>MegaPixels</div>
+
+ </a>
+ <div class='links'>
+ <a href="/datasets/">Datasets</a>
+ <a href="/about/">About</a>
+ </div>
+ </header>
+ <div class="content content-">
+
+ <section>
+ <h1>From 1 to 100 Pixels</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-04</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+ <section><h3>High resolution insights from low resolution data</h3>
+<p>This post will be about the meaning of "face". How do people define it? How do biometrics researchers define it? How has it changed during the last decade?</p>
+<p>What can you know from a very small amount of information?</p>
+<ul>
+<li>1 pixel grayscale</li>
+<li>2x2 pixels grayscale, font example, can encode letters</li>
+<li>3x3 pixels: can create a font</li>
+<li>4x4 pixels: how many variations (see the counting sketch after this list)</li>
+<li>8x8 pixels: many more variations</li>
+<li>5x7 face recognition </li>
+<li>12x16 activity recognition</li>
+<li>QMUL SurvFace: faces range from 6/5 up to 124/106 pixels in height/width, with an average of 24/20</li>
+<li>(prepare a ProGAN render of the QMUL dataset and TinyFaces)</li>
+<li>20x16 tiny faces paper</li>
+<li>20x20 MNIST handwritten images <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></li>
+<li>24x24 Haar cascade detector idealized images</li>
+<li>32x32 CIFAR image dataset</li>
+<li>40x40: can do emotion detection, face recognition at scale, 3D modeling of the face; include datasets with faces at this resolution, including pedestrian datasets</li>
+<li>NIST standards begin to appear from 40x40, distinguish ocular pixels</li>
+<li>need more material from 60-100</li>
+<li>60x60: show how texture, pupils, and eye color emerge; higher resolution of features; compare to lower-resolution faces</li>
+<li>100x100 all you need for medical diagnosis</li>
+<li>100x100 0.5% of one Instagram photo</li>
+</ul>
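+<p>A rough sense of "how many variations" each of these resolutions allows can be computed directly: an 8-bit grayscale image with w x h pixels has 256^(w*h) possible states. A minimal Python sketch, assuming independent 8-bit pixels (which real images never are):</p>
+<pre><code>
+# Count the possible 8-bit grayscale images at a few small resolutions.
+# Assumes every pixel is independent; real images are far more constrained.
+for w, h in [(1, 1), (2, 2), (3, 3), (4, 4), (8, 8), (12, 16)]:
+    states = 256 ** (w * h)
+    print(f"{w}x{h}: 256^{w * h}, about 10^{len(str(states)) - 1} possible images")
+</code></pre>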
+<p>Ideas:</p>
+<ul>
+<li>Find specific cases of facial resolution being used in legal cases, forensic investigations, or military footage</li>
+<li>resolution of boston bomber face</li>
+<li>resolution of the state of the union image</li>
+</ul>
+<h3>Research</h3>
+<ul>
+<li>NIST report on super-resolution states several resolutions</li>
+<li>"Results show that the tested face recognition systems yielded similar performance for query sets with eye-to-eye distance from 60 pixels to 30 pixels" <sup class="footnote-ref" id="fnref-nist_sres"><a href="#fn-nist_sres">1</a></sup></li>
+</ul>
+<ul>
+<li>"Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf </li>
+<li>IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."</li>
+</ul>
+<p>As the resolution
+formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values</p>
+<p>To consider how visual privacy applies to real-world surveillance situations, the first</p>
+<p>A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet <code>a-Z0-9</code> with room to spare.</p>
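+<p>A minimal sketch of this idea in Python, mapping each alphanumeric character to one 8-bit pixel value (the mapping is arbitrary and chosen only for illustration, not any standard encoding):</p>
+<pre><code>
+import string
+
+# 26 + 26 + 10 = 62 characters fit easily into the 256 values of one 8-bit pixel.
+alphabet = string.ascii_letters + string.digits
+encode = {ch: i for i, ch in enumerate(alphabet)}   # character to pixel value
+decode = {i: ch for ch, i in encode.items()}        # pixel value back to character
+
+pixels = [encode[ch] for ch in "MegaPixels2018"]
+print(pixels)
+print("".join(decode[p] for p in pixels))  # round-trips back to the original text
+</code></pre>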
+<p>A 2x2 pixel grayscale image contains 256<sup>4</sup>, roughly 4.3 billion, possible combinations.</p>
+<p>Using a face image of no more than 42 pixels (a 6x7 image), researchers [cite] were able to correctly distinguish between a group of 50 people. Yet</p>
+<p>The likely conclusion of face recognition research is that more data is needed to improve. Indeed, resolution is a determining factor for all biometric systems, both as training data to increase</p>
+<p>Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.</p>
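+<p>For example, a single row of pixels can be read out of an image and plotted as a one-dimensional signal rather than displayed as a picture. A minimal sketch, assuming numpy, Pillow, and matplotlib are available and using a placeholder filename:</p>
+<pre><code>
+import numpy as np
+from PIL import Image
+import matplotlib.pyplot as plt
+
+# Load an image as an array of 8-bit grayscale sensor values ("face.jpg" is a placeholder).
+img = np.array(Image.open("face.jpg").convert("L"))
+
+# Plot the middle row of pixel intensities as a signal.
+row = img[img.shape[0] // 2]
+plt.plot(row)
+plt.xlabel("pixel index")
+plt.ylabel("intensity (0-255)")
+plt.title("One row of pixel values")
+plt.show()
+</code></pre>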
+<p>Wi-Fi and cameras present elevated risks for transmitting video and image documentation from conflict zones, high-risk situations, or even for sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?</p>
+<p>As the global Internet becomes increasingly efficient at turning itself into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that by decreasing a face to 12x16 pixels it was still possible to achieve 98% accuracy with 50 people. This is clearly an example of</p>
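+<p>As a rough illustration of what such a reduction looks like in practice, the sketch below downscales a face crop to 12x16 pixels with Pillow. The filenames are placeholders, and this only shows the resizing step, not the recognition experiment cited above:</p>
+<pre><code>
+from PIL import Image
+
+# Reduce a face crop to a 12x16-pixel thumbnail (width x height).
+face = Image.open("face_crop.jpg").convert("L")
+tiny = face.resize((12, 16), resample=Image.BILINEAR)
+tiny.save("face_12x16.png")
+
+# Scale back up without interpolation to inspect the blocky result.
+tiny.resize(face.size, resample=Image.NEAREST).save("face_12x16_preview.png")
+</code></pre>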
+<p>This research module, tentatively called Visual Defense Tools, aims to explore the</p>
+<h3>Prior Research</h3>
+<ul>
+<li>MPI visual privacy advisor</li>
+<li>NIST: super resolution</li>
+<li>YouTube blur tool</li>
+<li>WITNESS: blur tool</li>
+<li>Pixellated text </li>
+<li>CV Dazzle</li>
+<li>Bellingcat guide to geolocation</li>
+<li>Peng! magic passport</li>
+</ul>
+<h3>Notes</h3>
+<ul>
+<li>In China, out of the approximately 200 million surveillance cameras, only about 15% have enough resolution for face recognition.</li>
+<li>In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 in 1,000,000 (see the back-of-envelope sketch after this list).</li>
+<li>In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals. </li>
+<li>In a face recognition trial at Berlin's Südkreuz station, the false-match rate was 20%.</li>
+</ul>
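+<p>A back-of-envelope reading of the Apple figure, assuming independent attempts (a simplification the security guide does not make):</p>
+<pre><code>
+# Probability that at least one of n random strangers unlocks a FaceID device,
+# taking the stated 1-in-1,000,000 false match rate at face value.
+p = 1 / 1_000_000
+for n in [1, 1_000, 100_000, 1_000_000]:
+    at_least_one = 1 - (1 - p) ** n
+    print(f"{n} attempts: {at_least_one:.4%} chance of a false match")
+</code></pre>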
+<p>What these examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact, face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match. This presents a real threat to privacy and lends</p>
+<p>Globally, iPhone users relying on FaceID and TouchID to protect their information unwittingly agree to a 1/1,000,000 probability</p>
+<div class="footnotes">
+<hr>
+<ol><li id="fn-nist_sres"><p>NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips<a href="#fnref-nist_sres" class="footnote">&#8617;</a></p></li>
+</ol>
+</div>
+</section>
+
+    </div>
+  <footer>
+    <div>
+      <a href="/">MegaPixels.cc</a>
+      <a href="/datasets/">Datasets</a>
+      <a href="/about/">About</a>
+      <a href="/about/press/">Press</a>
+      <a href="/about/legal/">Legal and Privacy</a>
+    </div>
+    <div>
+      MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
+      <a href="https://ahprojects.com">ahprojects.com</a>
+    </div>
+  </footer>
+</body>
+
+<script src="/assets/js/dist/index.js"></script>
+</html>
\ No newline at end of file