authorJules Laplace <julescarbon@gmail.com>2019-04-01 14:25:06 +0200
committerJules Laplace <julescarbon@gmail.com>2019-04-01 14:25:06 +0200
commit2d8b7dd6ea6ccb0293c8839898cf7a1246dc0eb4 (patch)
treee2423b11ba2ea30104673ce7786bb7411aad8229
parent95914436ee3b36e8c7b03941ca2cfd03a0404805 (diff)
rebuild
-rw-r--r--megapixels/app/site/builder.py2
-rw-r--r--megapixels/app/site/loader.py3
-rw-r--r--site/content/pages/datasets/uccs/index.md1
-rw-r--r--site/includes/map.html2
-rw-r--r--site/public/datasets/index.html2
-rw-r--r--site/public/research/01_from_1_to_100_pixels/index.html32
-rw-r--r--site/public/research/02_what_computers_can_see/index.html19
-rw-r--r--site/public/research/index.html18
8 files changed, 74 insertions, 5 deletions
diff --git a/megapixels/app/site/builder.py b/megapixels/app/site/builder.py
index 603d4788..55a85b0f 100644
--- a/megapixels/app/site/builder.py
+++ b/megapixels/app/site/builder.py
@@ -57,7 +57,7 @@ def build_page(fn, research_posts, datasets):
s3.sync_directory(dirname, s3_dir, metadata)
content = parser.parse_markdown(metadata, sections, s3_path, skip_h1=skip_h1)
-
+
html = template.render(
metadata=metadata,
content=content,
diff --git a/megapixels/app/site/loader.py b/megapixels/app/site/loader.py
index a544333b..d150942c 100644
--- a/megapixels/app/site/loader.py
+++ b/megapixels/app/site/loader.py
@@ -85,6 +85,9 @@ def parse_metadata(fn, sections):
metadata['meta'] = load_json(dataset_path)
if not metadata['meta']:
print("Bad metadata? {}".format(dataset_path))
+ else:
+ print(metadata['slug'])
+ print("{} does not exist!".format(dataset_path))
if 'meta' not in metadata or not metadata['meta']: # dude
metadata['meta'] = {}
diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md
index e0925e07..1e3ec097 100644
--- a/site/content/pages/datasets/uccs/index.md
+++ b/site/content/pages/datasets/uccs/index.md
@@ -6,6 +6,7 @@ desc: <span class="dataset-name">Unconstrained College Students (UCCS)</span> is
subdesc: The UCCS dataset includes 16,149 images and 1,732 identities of students at University of Colorado Colorado Springs campus and is used for face recognition and face detection
cssclass: dataset
image: assets/background.jpg
+slug: uccs
published: 2019-2-23
updated: 2019-2-23
authors: Adam Harvey
diff --git a/site/includes/map.html b/site/includes/map.html
index 30c248a6..7511d4c7 100644
--- a/site/includes/map.html
+++ b/site/includes/map.html
@@ -12,7 +12,7 @@
</div>
-->
<p>
- To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.
+ To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military, and academic research, publicly available research citing {{ metadata.meta.dataset.name_full }} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
</p>
</section>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index 03b38f8a..1d2630e1 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -28,7 +28,7 @@
<section><h1>Facial Recognition Datasets</h1>
-<h3>Survey</h3>
+<p>Explore publicly available facial recognition datasets. More datasets will be added throughout 2019.</p>
</section>
<section class='applet_container autosize'><div class='applet' data-payload='{"command":"dataset_list"}'></div></section>
diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html
index c91d17ad..37fc367f 100644
--- a/site/public/research/01_from_1_to_100_pixels/index.html
+++ b/site/public/research/01_from_1_to_100_pixels/index.html
@@ -80,6 +80,38 @@
<li>"Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf </li>
<li>IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."</li>
</ul>
+<p>As the resolution
+formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values</p>
+<p>To consider how visual privacy applies to real world surveillance situations, the first</p>
+<p>A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet <code>a-zA-Z0-9</code> with room to spare.</p>
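The arithmetic behind this claim can be sanity-checked directly (a minimal sketch; the 62-symbol alphanumeric alphabet is the assumption):

```python
# One 8-bit grayscale pixel can take 2**8 = 256 distinct values.
# The alphanumeric alphabet a-z, A-Z, 0-9 has only 62 symbols,
# so a single pixel can encode any one character with room to spare.
PIXEL_VALUES = 2 ** 8
ALPHABET = 26 + 26 + 10  # lowercase + uppercase + digits

assert PIXEL_VALUES >= ALPHABET
print(PIXEL_VALUES - ALPHABET)  # → 194 unused values
```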
+<p>A 2x2 pixel block contains</p>
+<p>Using a face image of no more than 42 pixels (6x7), researchers [cite] were able to correctly distinguish between members of a group of 50 people. Yet</p>
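For illustration, reducing an image to a 6x7 grid can be sketched with a simple nearest-neighbor downsample (a hypothetical helper in pure Python; no image library or specific dataset is assumed):

```python
def downsample(pixels, out_w=6, out_h=7):
    """Nearest-neighbor downsample of a grayscale image (rows of ints)."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A synthetic 64x64 grayscale gradient standing in for a face crop
image = [[(x + y) * 2 % 256 for x in range(64)] for y in range(64)]
thumb = downsample(image)
print(len(thumb), len(thumb[0]))  # → 7 6 (i.e. 42 pixels total)
```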
+<p>The likely conclusion of face recognition research is that more data is needed to improve performance. Indeed, resolution is a determining factor for all biometric systems, both as training data to increase</p>
+<p>Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.</p>
+<p>Wi-Fi and cameras present elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?</p>
+<p>As the global Internet becomes increasingly efficient at turning itself into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that by decreasing a face size to 12x16 it was possible to achieve 98% accuracy with 50 people. This is clearly an example of</p>
+<p>This research module, tentatively called Visual Defense Tools, aims to explore the</p>
+<h3>Prior Research</h3>
+<ul>
+<li>MPI visual privacy advisor</li>
+<li>NIST: super resolution</li>
+<li>YouTube blur tool</li>
+<li>WITNESS: blur tool</li>
+<li>Pixellated text </li>
+<li>CV Dazzle</li>
+<li>Bellingcat guide to geolocation</li>
+<li>Peng! magic passport</li>
+</ul>
+<h3>Notes</h3>
+<ul>
+<li>In China, out of the approximately 200 million surveillance cameras only about 15% have enough resolution for face recognition. </li>
+<li>In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000. </li>
+<li>In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals. </li>
+<li>In a face recognition trial at Berlin's Sudkreuz station, the false-match rate was 20%. </li>
+</ul>
+<p>What these examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact, face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match. This presents a real threat to privacy and lends</p>
+<p>Globally, iPhone users unwittingly agree to a 1/1,000,000 false-match probability when
+relying on FaceID and TouchID to protect their information.</p>
<div class="footnotes">
<hr>
<ol><li id="fn-nist_sres"><p>NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips<a href="#fnref-nist_sres" class="footnote">&#8617;</a></p></li>
diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html
index 9389bf84..0fce1373 100644
--- a/site/public/research/02_what_computers_can_see/index.html
+++ b/site/public/research/02_what_computers_can_see/index.html
@@ -126,6 +126,7 @@
<li>Wearing Necktie</li>
<li>Wearing Necklace</li>
</ul>
+<p><code>for i in {1..9}; do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4; done; for i in {10..20}; do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4; done</code></p>
<h2>From Market 1501</h2>
<p>The 27 attributes are:</p>
<table>
@@ -269,6 +270,24 @@ Visibility boolean for each keypoint
Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder)
Body type (male, female or child)</p>
<p>source: <a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/">https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/</a></p>
+<h2>From Leeds Sports Pose</h2>
+<p><code>=INDEX(A2:A9,MATCH(datasets!D1,B2:B9,0))</code><br>
+<code>=VLOOKUP(A2, datasets!A:J, 7, FALSE)</code></p>
+<p>Right ankle
+Right knee
+Right hip
+Left hip
+Left knee
+Left ankle
+Right wrist
+Right elbow
+Right shoulder
+Left shoulder
+Left elbow
+Left wrist
+Neck
+Head top</p>
+<p>source: <a href="http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html">http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html</a></p>
</section>
</div>
diff --git a/site/public/research/index.html b/site/public/research/index.html
index 303732f8..0ef57043 100644
--- a/site/public/research/index.html
+++ b/site/public/research/index.html
@@ -26,8 +26,22 @@
</header>
<div class="content content-">
- <section><h1>Research Blog</h1>
-</section>
+ <section>
+ <h1>Research</h1>
+ <div class='meta'>
+ <div>
+ <div class='gray'>Posted</div>
+ <div>2018-12-15</div>
+ </div>
+ <div>
+ <div class='gray'>By</div>
+ <div>Adam Harvey</div>
+ </div>
+
+ </div>
+ </section>
+
+
</div>
<footer>