author     adamhrv <adam@ahprojects.com>  2019-04-17 22:46:34 +0200
committer  adamhrv <adam@ahprojects.com>  2019-04-17 22:46:34 +0200
commit     2813b772c8a088307f7a1ab9df167875d320162d (patch)
tree       44ef026ce7b3f4f8d8070580a5c0c37314d109a1 /site
parent     61fbcb8f2709236f36a103a73e0bd9d1dd3723e8 (diff)
update duke
Diffstat (limited to 'site')
-rw-r--r--  site/assets/css/css.css                                          60
-rw-r--r--  site/content/pages/about/attribution.md                           2
-rw-r--r--  site/content/pages/about/index.md                                13
-rw-r--r--  site/content/pages/datasets/brainwash/index.md                   14
-rw-r--r--  site/content/pages/datasets/duke_mtmc/index.md                    4
-rw-r--r--  site/content/pages/datasets/msceleb/index.md                     28
-rw-r--r--  site/content/pages/datasets/oxford_town_centre/index.md           6
-rw-r--r--  site/content/pages/datasets/uccs/index.md                         4
-rw-r--r--  site/content/pages/research/02_what_computers_can_see/index.md    7
-rw-r--r--  site/public/about/attribution/index.html                          2
-rw-r--r--  site/public/datasets/50_people_one_question/index.html            2
-rw-r--r--  site/public/datasets/afad/index.html                              2
-rw-r--r--  site/public/datasets/brainwash/index.html                         6
-rw-r--r--  site/public/datasets/caltech_10k/index.html                       2
-rw-r--r--  site/public/datasets/celeba/index.html                            2
-rw-r--r--  site/public/datasets/cofw/index.html                              4
-rw-r--r--  site/public/datasets/duke_mtmc/index.html                         2
-rw-r--r--  site/public/datasets/feret/index.html                             2
-rw-r--r--  site/public/datasets/lfpw/index.html                              2
-rw-r--r--  site/public/datasets/lfw/index.html                               2
-rw-r--r--  site/public/datasets/market_1501/index.html                       2
-rw-r--r--  site/public/datasets/msceleb/index.html                          18
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html                8
-rw-r--r--  site/public/datasets/pipa/index.html                              2
-rw-r--r--  site/public/datasets/pubfig/index.html                            2
-rw-r--r--  site/public/datasets/uccs/index.html                              2
-rw-r--r--  site/public/datasets/vgg_face2/index.html                         2
-rw-r--r--  site/public/datasets/viper/index.html                             2
-rw-r--r--  site/public/datasets/youtube_celebrities/index.html               2
-rw-r--r--  site/public/research/02_what_computers_can_see/index.html         4
30 files changed, 134 insertions, 76 deletions
diff --git a/site/assets/css/css.css b/site/assets/css/css.css
index 492ec347..e1acf5ac 100644
--- a/site/assets/css/css.css
+++ b/site/assets/css/css.css
@@ -111,20 +111,20 @@ header .links a {
margin-right: 32px;
transition: color 0.1s cubic-bezier(0,0,1,1), border-color 0.05s cubic-bezier(0,0,1,1);
border-bottom: 1px solid rgba(255,255,255,0);
- padding: 3px;
+ padding-bottom: 3px;
font-weight: 400;
}
header .links a.active {
color: #fff;
- border-bottom: 1px solid rgba(255,255,255,255);
+ border-bottom: 2px solid rgba(255,255,255,1);
}
.desktop header .links a:hover {
color: #fff;
- border-bottom: 1px solid rgba(255,255,255,255);
+ border-bottom: 2px solid rgba(255,255,255,1);
}
.desktop header .links a.active:hover {
color: #fff;
- border-bottom: 1px solid rgba(255,255,255,255);
+ border-bottom: 2px solid rgba(255,255,255,1);
}
header .links.splash{
font-size:22px;
@@ -139,10 +139,10 @@ footer {
display: flex;
flex-direction: row;
justify-content: space-between;
- color: #888;
- font-size: 9pt;
+ color: #666;
+ font-size: 11px;
line-height: 17px;
- padding: 20px 0 20px;
+ padding: 15px;
font-family: "Roboto", sans-serif;
}
footer > div {
@@ -157,14 +157,36 @@ footer > div:nth-child(2) {
}
footer a {
display: inline-block;
- color: #888;
+ color: #ccc;
transition: color 0.1s cubic-bezier(0,0,1,1);
- margin-right: 5px;
+ border-bottom:1px solid #555;
+ padding-bottom: 1px;
+ text-decoration: none;
}
-footer a:hover {
- color: #ddd;
+footer a:hover{
+ color: #ccc;
+ border-bottom:1px solid #999;
+}
+footer ul{
+ margin:0;
+}
+footer ul li{
+ color: #bbb;
+ margin: 0 5px 0 0;
+ font-size: 12px;
+ display: inline-block;
+}
+footer ul li:last-child{
+ margin-right:0px;
+}
+footer ul.footer-left{
+ float:left;
+ margin-left:40px;
+}
+footer ul.footer-right{
+ float:right;
+ margin-right:40px;
}
-
/* headings */
h1 {
@@ -286,7 +308,7 @@ p.subp{
font-size: 14px;
}
.content a {
- color: #fff;
+ color: #dedede;
text-decoration: none;
border-bottom: 2px solid #666;
padding-bottom: 1px;
@@ -731,6 +753,7 @@ section.fullwidth .image {
display: flex;
flex-direction: row;
flex-wrap: wrap;
+ margin:0;
}
.dataset-list a {
text-decoration: none;
@@ -1063,18 +1086,19 @@ ul.map-legend li.source:before {
/* footnotes */
a.footnote {
- font-size: 10px;
+ font-size: 9px;
+ line-height: 0px;
position: relative;
- display: inline-block;
- bottom: 10px;
+ /*display: inline-block;*/
+ bottom: 7px;
text-decoration: none;
color: #ff8;
border: 0;
- left: 2px;
+ left: -1px;
transition-duration: 0s;
}
a.footnote_shim {
- display: inline-block;
+ /*display: inline-block;*/
width: 1px; height: 1px;
overflow: hidden;
position: relative;
diff --git a/site/content/pages/about/attribution.md b/site/content/pages/about/attribution.md
index cf537ad4..5060b2d9 100644
--- a/site/content/pages/about/attribution.md
+++ b/site/content/pages/about/attribution.md
@@ -32,7 +32,7 @@ If you use the MegaPixels data or any data derived from it, please cite the orig
title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
year = 2019,
url = {https://megapixels.cc/},
- urldate = {2019-04-20}
+ urldate = {2019-04-18}
}
</pre>
diff --git a/site/content/pages/about/index.md b/site/content/pages/about/index.md
index f68008cc..a6ce3d3d 100644
--- a/site/content/pages/about/index.md
+++ b/site/content/pages/about/index.md
@@ -24,8 +24,9 @@ authors: Adam Harvey
MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.
+MegaPixels is made possible with support from <a href="http://mozilla.org">Mozilla</a>, our primary funding partner.
-The MegaPixels site is made possible with support from <a href="http://mozilla.org">Mozilla</a>
+Additional support for MegaPixels is provided by the European ARTificial Intelligence Network (AI LAB) at the Ars Electronica Center, a 1-year research-in-residence grant from Karlsruhe HfG, and sales from the Privacy Gift Shop.
<div class="flex-container team-photos-container">
@@ -85,6 +86,16 @@ You are free:
Please direct questions, comments, or feedback to [mastodon.social/@adamhrv](https://mastodon.social/@adamhrv)
+#### Funding Partners
+
+The MegaPixels website, research, and development are made possible with support from Mozilla, our primary funding partner.
+
+[ add logos ]
+
+Additional support is provided by the European ARTificial Intelligence Network (AI LAB) at the Ars Electronica Center and a 1-year research-in-residence grant from Karlsruhe HfG.
+
+[ add logos ]
+
##### Attribution
If you use MegaPixels or any data derived from it for your work, please cite our original work as follows:
diff --git a/site/content/pages/datasets/brainwash/index.md b/site/content/pages/datasets/brainwash/index.md
index b57bcdf4..75b0c006 100644
--- a/site/content/pages/datasets/brainwash/index.md
+++ b/site/content/pages/datasets/brainwash/index.md
@@ -8,8 +8,8 @@ slug: brainwash
cssclass: dataset
image: assets/background.jpg
year: 2015
-published: 2019-2-23
-updated: 2019-2-23
+published: 2019-4-18
+updated: 2019-4-18
authors: Adam Harvey
------------
@@ -25,24 +25,20 @@ The Brainwash dataset is unique because it uses images from a publicly available
Although Brainwash appears to be a less popular dataset, researchers from the National University of Defense Technology in China took note of it in 2016 and 2017 and used it for two [research](https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c) [projects](https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b) on advancing the capabilities of object detection to more accurately isolate the target region in an image ([PDF](https://www.itm-conferences.org/articles/itmconf/pdf/2017/04/itmconf_ita2017_05006.pdf)). [^localized_region_context] [^replacement_algorithm] The dataset also appears in a 2017 [research paper](https://ieeexplore.ieee.org/document/7877809) from Peking University for the purpose of improving surveillance capabilities for "people detection in the crowded scenes".
-
-![caption: A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc](assets/brainwash_grid.jpg)
+![caption: A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_grid.jpg)
{% include 'dashboard.html' %}
{% include 'supplementary_header.html' %}
+![caption: A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_example.jpg)
-![caption: An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_example.jpg)
-
-![caption: A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc](assets/brainwash_saliency_map.jpg)
-
+![caption: A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_saliency_map.jpg)
{% include 'cite_our_work.html' %}
### Footnotes
-
[^readme]: "readme.txt" https://exhibits.stanford.edu/data/catalog/sx925dc9385.
[^end_to_end]: Stewart, Russell. Andriluka, Mykhaylo. "End-to-end people detection in crowded scenes". 2016.
[^localized_region_context]: Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.
diff --git a/site/content/pages/datasets/duke_mtmc/index.md b/site/content/pages/datasets/duke_mtmc/index.md
index 1dd189ac..69bc5aa7 100644
--- a/site/content/pages/datasets/duke_mtmc/index.md
+++ b/site/content/pages/datasets/duke_mtmc/index.md
@@ -7,8 +7,8 @@ subdesc: Duke MTMC contains over 2 million video frames and 2,700 unique identit
slug: duke_mtmc
cssclass: dataset
image: assets/background.jpg
-published: 2019-2-23
-updated: 2019-2-23
+published: 2019-4-18
+updated: 2019-4-18
authors: Adam Harvey
------------
diff --git a/site/content/pages/datasets/msceleb/index.md b/site/content/pages/datasets/msceleb/index.md
index d5e52952..4c9f1576 100644
--- a/site/content/pages/datasets/msceleb/index.md
+++ b/site/content/pages/datasets/msceleb/index.md
@@ -8,8 +8,8 @@ slug: msceleb
cssclass: dataset
image: assets/background.jpg
year: 2015
-published: 2019-2-23
-updated: 2019-2-23
+published: 2019-4-18
+updated: 2019-4-18
authors: Adam Harvey
------------
@@ -19,10 +19,21 @@ authors: Adam Harvey
### sidebar
### end sidebar
+The Microsoft Celeb dataset is a face recognition training set made entirely of images scraped from the Internet. According to Microsoft Research, which created and published the dataset in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of 100,000 individuals.
+
+But Microsoft's ambition was bigger. They wanted to recognize 1 million individuals. As part of their dataset they released a list of 1 million target identities for researchers to identify. The identities
+
+https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/
+
+In 2019, Microsoft president Brad Smith called for the governmental regulation of face recognition, an admission of his own company's inability to control their surveillance-driven business model. Yet since then, and for the last 4 years, Microsoft has willingly and actively played a significant role in accelerating growth in the very same industry they called for the government to regulate. This investigation looks into the [MS Celeb](https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/) dataset and Microsoft Research's role in creating and distributing the largest publicly available face recognition dataset in the world to both.
+
+
+
+To spur growth and incentivize researchers, Microsoft released a dataset called [MS Celeb](https://msceleb.org), or Microsoft Celeb, in which they developed and published a list of exactly 1 million targeted people whose biometrics would go on to build
+
+
-https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology
-https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm
{% include 'dashboard.html' %}
@@ -30,11 +41,12 @@ https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-ab
### Additional Information
-- The dataset author spoke about his research at the CVPR conference in 2016 <https://www.youtube.com/watch?v=Nl2fBKxwusQ>
+- SenseTime https://www.semanticscholar.org/paper/The-Devil-of-Face-Recognition-is-in-the-Noise-Wang-Chen/9e31e77f9543ab42474ba4e9330676e18c242e72
+- Microsoft used it https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70
+- https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology
+- https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm
### Footnotes
-[^readme]: "readme.txt" https://exhibits.stanford.edu/data/catalog/sx925dc9385.
-[^localized_region_context]: Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.
-[^replacement_algorithm]: Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering. \ No newline at end of file
+[^brad_smith]: Brad Smith cite \ No newline at end of file
diff --git a/site/content/pages/datasets/oxford_town_centre/index.md b/site/content/pages/datasets/oxford_town_centre/index.md
index c32cd022..21d3d949 100644
--- a/site/content/pages/datasets/oxford_town_centre/index.md
+++ b/site/content/pages/datasets/oxford_town_centre/index.md
@@ -19,7 +19,7 @@ authors: Adam Harvey
### sidebar
### end sidebar
-The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.[^ben_benfold_orig] The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009[^guiding_surveillance] the Oxford Town Centre dataset has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.
+The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.[^ben_benfold_orig] The CCTV video was obtained from a surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009[^guiding_surveillance] the [Oxford Town Centre dataset](http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html) has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.
The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows the pedestrians acting naturally and unrehearsed, indicating they neither knew of nor consented to participation in the research project.
@@ -29,9 +29,9 @@ The Oxford Town Centre dataset is unique in that it uses footage from a public s
### Location
-The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs [source](https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656). At that location, two public CCTV cameras exist mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their [research projects](http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html) it is likely that they would also be able to access to this camera.
+The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs [source](https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656). At that location, two public CCTV cameras are mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of the silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their [research projects](http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html), it is all the more likely that they would also be able to access this camera.
-To discredit the theory that this public CCTV is only seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera [pointing in the same direction](https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/) as the Oxford Town Centre dataset proving the camera can and has been rotated before.
+Next, to discredit the theory that this public CCTV camera is only ever seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera [pointing in the same direction](https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/) as in the Oxford Town Centre dataset, proving the camera can be and has been rotated before.
As for the capture date, the text on the storefront display shows a sale happening from December 2nd &ndash; 7th, indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007 since prior to 2007 the Carphone Warehouse ([photo](https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/), [history](http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html)) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a [GAP website snapshot](https://web.archive.org/web/20081201002524/http://www.gap.com/) from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and slight waste residue near the bench suggests that it was probably a weekday after rubbish removal.
diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md
index b6073384..e9ea80c8 100644
--- a/site/content/pages/datasets/uccs/index.md
+++ b/site/content/pages/datasets/uccs/index.md
@@ -9,8 +9,8 @@ image: assets/background.jpg
cssclass: dataset
image: assets/background.jpg
slug: uccs
-published: 2019-2-23
-updated: 2019-4-15
+published: 2019-4-18
+updated: 2019-4-19
authors: Adam Harvey
------------
diff --git a/site/content/pages/research/02_what_computers_can_see/index.md b/site/content/pages/research/02_what_computers_can_see/index.md
index 51621f46..faa4ab17 100644
--- a/site/content/pages/research/02_what_computers_can_see/index.md
+++ b/site/content/pages/research/02_what_computers_can_see/index.md
@@ -25,6 +25,13 @@ A list of 100 things computer vision can see, eg:
- tired, drowsiness in car
- affectiva: interest in product, intent to buy
+## From SenseTime paper
+
+Exploring Disentangled Feature Representation Beyond Face Identification
+
+From https://arxiv.org/pdf/1804.03487.pdf
+The attribute IDs from 1 to 40 correspond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘Big Nose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’, ‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’, ‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheekbones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Narrow Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘Pointy Nose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’, ‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Earrings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Necklace’, ‘Wearing Necktie’ and ‘Young’.
+
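A minimal lookup sketch for the quoted mapping, assuming the standard 1-based CelebA attribute ordering listed above (the constant and helper names below are illustrative, not taken from the paper's code):

```python
# Illustrative only: 1-indexed lookup for the 40 attribute labels quoted above.
CELEBA_ATTRIBUTES = [
    "5 o Clock Shadow", "Arched Eyebrows", "Attractive", "Bags Under Eyes",
    "Bald", "Bangs", "Big Lips", "Big Nose", "Black Hair", "Blond Hair",
    "Blurry", "Brown Hair", "Bushy Eyebrows", "Chubby", "Double Chin",
    "Eyeglasses", "Goatee", "Gray Hair", "Heavy Makeup", "High Cheekbones",
    "Male", "Mouth Slightly Open", "Mustache", "Narrow Eyes", "No Beard",
    "Oval Face", "Pale Skin", "Pointy Nose", "Receding Hairline", "Rosy Cheeks",
    "Sideburns", "Smiling", "Straight Hair", "Wavy Hair", "Wearing Earrings",
    "Wearing Hat", "Wearing Lipstick", "Wearing Necklace", "Wearing Necktie",
    "Young",
]

def attribute_name(attribute_id: int) -> str:
    """Return the label for a 1-based attribute ID (1-40)."""
    if not 1 <= attribute_id <= len(CELEBA_ATTRIBUTES):
        raise ValueError(f"attribute ID out of range: {attribute_id}")
    return CELEBA_ATTRIBUTES[attribute_id - 1]

# Example: IDs 21 and 40 resolve to "Male" and "Young".
print(attribute_name(21), attribute_name(40))
```
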
## From PubFig Dataset
diff --git a/site/public/about/attribution/index.html b/site/public/about/attribution/index.html
index 5fe92b8d..7b09e5b4 100644
--- a/site/public/about/attribution/index.html
+++ b/site/public/about/attribution/index.html
@@ -42,7 +42,7 @@
title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
year = 2019,
url = {https://megapixels.cc/},
- urldate = {2019-04-20}
+ urldate = {2019-04-18}
}
</pre><p>and include this license and attribution protocol within any derivative work.</p>
<p>If you publish data derived from MegaPixels, the original dataset creators should first be notified.</p>
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
index 79411122..76d5b92f 100644
--- a/site/public/datasets/50_people_one_question/index.html
+++ b/site/public/datasets/50_people_one_question/index.html
@@ -88,7 +88,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
index 7969c1d6..a3ff00cf 100644
--- a/site/public/datasets/afad/index.html
+++ b/site/public/datasets/afad/index.html
@@ -90,7 +90,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 0f782924..305935ac 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -52,7 +52,7 @@
</div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100 second intervals throughout the entire day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According to the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">research paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
<p>The Brainwash dataset is unique because it uses images from a publicly available webcam that records people inside a privately owned business without any consent. No ordinary cafe customer could ever suspect their image would end up in a dataset used for surveillance research and development, but that is exactly what happened to customers at Brainwash cafe in San Francisco.</p>
<p>Although Brainwash appears to be a less popular dataset, researchers from the National University of Defense Technology in China took note of it in 2016 and 2017 and used it for two <a href="https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c">research</a> <a href="https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b">projects</a> on advancing the capabilities of object detection to more accurately isolate the target region in an image (<a href="https://www.itm-conferences.org/articles/itmconf/pdf/2017/04/itmconf_ita2017_05006.pdf">PDF</a>). <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a> The dataset also appears in a 2017 <a href="https://ieeexplore.ieee.org/document/7877809">research paper</a> from Peking University for the purpose of improving surveillance capabilities for "people detection in the crowded scenes".</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc</div></div></section><section>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
<h3>Who used Brainwash Dataset?</h3>
<p>
@@ -99,7 +99,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -112,7 +112,7 @@
<h2>Supplementary Information</h2>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_example.jpg' alt=' An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. &copy; megapixels.cc</div></div></section><section>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_example.jpg' alt=' A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
<h4>Cite Our Work</h4>
<p>
diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html
index abb55148..e86c5ca3 100644
--- a/site/public/datasets/caltech_10k/index.html
+++ b/site/public/datasets/caltech_10k/index.html
@@ -96,7 +96,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
index a4a7efa2..0236b91c 100644
--- a/site/public/datasets/celeba/index.html
+++ b/site/public/datasets/celeba/index.html
@@ -94,7 +94,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index c6d7417e..b0e73dac 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -87,7 +87,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -138,7 +138,7 @@ To increase the number of training images, and since COFW has the exact same la
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index bd4fb8d9..d49f621b 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -246,7 +246,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
index 7f9ed94c..09abaee2 100644
--- a/site/public/datasets/feret/index.html
+++ b/site/public/datasets/feret/index.html
@@ -90,7 +90,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
index a9eb025d..1238c8d3 100644
--- a/site/public/datasets/lfpw/index.html
+++ b/site/public/datasets/lfpw/index.html
@@ -83,7 +83,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index ff7a3cd9..45709810 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -97,7 +97,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index 05750dc7..a72cb6cf 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -91,7 +91,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index 86741647..1f037bae 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -49,8 +49,11 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div>
- </div></div><p><a href="https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology">https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology</a></p>
-<p><a href="https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm">https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm</a></p>
+ </div></div><p>The Microsoft Celeb dataset is a face recognition training set made entirely of images scraped from the Internet. According to Microsoft Research, which created and published the dataset in 2016, MS Celeb is the largest publicly available face recognition dataset in the world, containing over 10 million images of 100,000 individuals.</p>
+<p>But Microsoft's ambition was bigger. They wanted to recognize 1 million individuals. As part of their dataset they released a list of 1 million target identities for researchers to identify. The identities</p>
+<p><a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/</a></p>
+<p>In 2019, Microsoft president Brad Smith called for the governmental regulation of face recognition, an admission of his own company's inability to control their surveillance-driven business model. Yet since then, and for the last 4 years, Microsoft has willingly and actively played a significant role in accelerating growth in the very same industry they called for the government to regulate. This investigation looks into the <a href="https://www.microsoft.com/en-us/research/publication/ms-celeb-1m-dataset-benchmark-large-scale-face-recognition-2/">MS Celeb</a> dataset and Microsoft Research's role in creating and distributing the largest publicly available face recognition dataset in the world to both.</p>
+<p>To spur growth and incentivize researchers, Microsoft released a dataset called <a href="https://msceleb.org">MS Celeb</a>, or Microsoft Celeb, in which they developed and published a list of exactly 1 million targeted people whose biometrics would go on to build</p>
</section><section>
<h3>Who used Microsoft Celeb?</h3>
@@ -98,7 +101,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -113,11 +116,12 @@
</section><section><h3>Additional Information</h3>
<ul>
-<li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li>
+<li>SenseTime <a href="https://www.semanticscholar.org/paper/The-Devil-of-Face-Recognition-is-in-the-Noise-Wang-Chen/9e31e77f9543ab42474ba4e9330676e18c242e72">https://www.semanticscholar.org/paper/The-Devil-of-Face-Recognition-is-in-the-Noise-Wang-Chen/9e31e77f9543ab42474ba4e9330676e18c242e72</a></li>
+<li>Microsoft used it <a href="https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70">https://www.semanticscholar.org/paper/One-shot-Face-Recognition-by-Promoting-Classes-Guo/6cacda04a541d251e8221d70ac61fda88fb61a70</a></li>
+<li><a href="https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology">https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology</a></li>
+<li><a href="https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm">https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm</a></li>
</ul>
-</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
-</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p>
-</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"></span><p>Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^brad_smith]" class="footnote_shim"></a><span class="backlinks"></span><p>Brad Smith cite</p>
</li></ul></section></section>
</div>
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index 03d8934b..cf81e2ef 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -49,7 +49,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div>
- </div></div><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the Oxford Town Centre dataset has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
+ </div></div><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">Oxford Town Centre dataset</a> has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows the pedestrians acting naturally and unrehearsed, indicating they neither knew of nor consented to participation in the research project.</p>
</section><section>
<h3>Who used TownCentre?</h3>
@@ -98,7 +98,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
@@ -112,8 +112,8 @@
<h2>Supplementary Information</h2>
</section><section><h3>Location</h3>
-<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs <a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>. At that location, two public CCTV cameras exist mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a> it is likely that they would also be able to access to this camera.</p>
-<p>To discredit the theory that this public CCTV is only seen pointing the other way in Google Street View images, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as the Oxford Town Centre dataset proving the camera can and has been rotated before.</p>
+<p>The street location of the camera used for the Oxford Town Centre dataset was confirmed by matching the road, benches, and store signs <a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">source</a>. At that location, two public CCTV cameras are mounted on the side of the Northgate House building at 13-20 Cornmarket St. Because of the lower camera's mounting pole directionality, a view from a private camera in the building across the street can be ruled out because it would have to show more of the silhouette of the lower camera's mounting pole. Two options remain: either the public CCTV camera mounted to the side of the building was used or the researchers mounted their own camera to the side of the building in the same location. Because the researchers used many other existing public CCTV cameras for their <a href="http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html">research projects</a>, it is all the more likely that they would also be able to access this camera.</p>
+<p>Next, although Google Street View images only show this public CCTV camera pointing the other way, at least one public photo shows the upper CCTV camera <a href="https://www.oxcivicsoc.org.uk/northgate-house-cornmarket/">pointing in the same direction</a> as in the Oxford Town Centre dataset, proving that the camera can be, and has been, rotated.</p>
<p>As for the capture date, the text on the storefront display shows a sale happening from December 2nd &ndash; 7th, indicating the capture date was between or just before those dates. The capture year is either 2008 or 2007 since prior to 2007 the Carphone Warehouse (<a href="https://www.flickr.com/photos/katieportwin/364492063/in/photolist-4meWFE-yd7rw-yd7X6-5sDHuc-yd7DN-59CpEK-5GoHAc-yd7Zh-3G2uJP-yd7US-5GomQH-4peYpq-4bAEwm-PALEr-58RkAp-5pHEkf-5v7fGq-4q1J9W-4kypQ2-5KX2Eu-yd7MV-yd7p6-4McgWb-5pJ55w-24N9gj-37u9LK-4FVcKQ-a81Enz-5qNhTG-59CrMZ-2yuwYM-5oagH5-59CdsP-4FVcKN-4PdxhC-5Lhr2j-2PAd2d-5hAwvk-zsQSG-4Cdr4F-3dUPEi-9B1RZ6-2hv5NY-4G5qwP-HCHBW-4JiuC4-4Pdr9Y-584aEV-2GYBEc-HCPkp/">photo</a>, <a href="http://www.oxfordhistory.org.uk/cornmarket/west/47_51.html">history</a>) did not exist at this location. Since the sweaters in the GAP window display are more similar to those in a <a href="web.archive.org/web/20081201002524/http://www.gap.com/">GAP website snapshot</a> from November 2007, our guess is that the footage was obtained during late November or early December 2007. The lack of street vendors and the slight waste residue near the bench suggest that it was probably a weekday after rubbish removal.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_cctv.jpg' alt=' Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)'><div class='caption'> Footage from this public CCTV camera was used to create the Oxford Town Centre dataset. Image sources: Google Street View (<a href="https://www.google.com/maps/@51.7528162,-1.2581152,3a,50.3y,310.59h,87.23t/data=!3m7!1e1!3m5!1s3FsGN-PqYC-VhQGjWgmBdQ!2e0!5s20120601T000000!7i13312!8i6656">map</a>)</div></div></section><section><div class='columns columns-'><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_body.jpg' alt=' Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map body visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/oxford_town_centre_sal_face.jpg' alt=' Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc'><div class='caption'> Heat map face visualization of the pedestrians detected in the Oxford Town Centre dataset &copy; megapixels.cc</div></div></section></div></section><section>
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index ae8aef6d..297f4d45 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -94,7 +94,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html
index ef289954..5feed748 100644
--- a/site/public/datasets/pubfig/index.html
+++ b/site/public/datasets/pubfig/index.html
@@ -91,7 +91,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index 3652e329..5fdde7e1 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -104,7 +104,7 @@ Their setup made it impossible for students to know they were being photographed
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
index 24ce4b2d..5f314d9e 100644
--- a/site/public/datasets/vgg_face2/index.html
+++ b/site/public/datasets/vgg_face2/index.html
@@ -96,7 +96,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html
index e4b2a05a..4d2abbe1 100644
--- a/site/public/datasets/viper/index.html
+++ b/site/public/datasets/viper/index.html
@@ -96,7 +96,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html
index e90b45cb..d0a7a172 100644
--- a/site/public/datasets/youtube_celebrities/index.html
+++ b/site/public/datasets/youtube_celebrities/index.html
@@ -75,7 +75,7 @@
<h3>Dataset Citations</h3>
<p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
</p>
<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html
index d139e83e..aac0b723 100644
--- a/site/public/research/02_what_computers_can_see/index.html
+++ b/site/public/research/02_what_computers_can_see/index.html
@@ -52,6 +52,10 @@
<li>tired, drowsiness in car</li>
<li>affectiva: interest in product, intent to buy</li>
</ul>
+<h2>From SenseTime paper</h2>
+<p>Exploring Disentangled Feature Representation Beyond Face Identification</p>
+<p>From <a href="https://arxiv.org/pdf/1804.03487.pdf">https://arxiv.org/pdf/1804.03487.pdf</a>
+The attribute IDs from 1 to 40 correspond to: ‘5 o Clock Shadow’, ‘Arched Eyebrows’, ‘Attractive’, ‘Bags Under Eyes’, ‘Bald’, ‘Bangs’, ‘Big Lips’, ‘Big Nose’, ‘Black Hair’, ‘Blond Hair’, ‘Blurry’, ‘Brown Hair’, ‘Bushy Eyebrows’, ‘Chubby’, ‘Double Chin’, ‘Eyeglasses’, ‘Goatee’, ‘Gray Hair’, ‘Heavy Makeup’, ‘High Cheekbones’, ‘Male’, ‘Mouth Slightly Open’, ‘Mustache’, ‘Narrow Eyes’, ‘No Beard’, ‘Oval Face’, ‘Pale Skin’, ‘Pointy Nose’, ‘Receding Hairline’, ‘Rosy Cheeks’, ‘Sideburns’, ‘Smiling’, ‘Straight Hair’, ‘Wavy Hair’, ‘Wearing Earrings’, ‘Wearing Hat’, ‘Wearing Lipstick’, ‘Wearing Necklace’, ‘Wearing Necktie’ and ‘Young’.</p>
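Since the paper keys its results to these numeric IDs, it can help to keep the same 40 names as a 1-indexed lookup. A minimal sketch, illustrative only, with the names copied from the quoted list rather than from an official mapping:

```python
# The 40 attribute names quoted above, in the paper's order; IDs are 1-indexed.
ATTRIBUTE_NAMES = [
    "5 o Clock Shadow", "Arched Eyebrows", "Attractive", "Bags Under Eyes",
    "Bald", "Bangs", "Big Lips", "Big Nose", "Black Hair", "Blond Hair",
    "Blurry", "Brown Hair", "Bushy Eyebrows", "Chubby", "Double Chin",
    "Eyeglasses", "Goatee", "Gray Hair", "Heavy Makeup", "High Cheekbones",
    "Male", "Mouth Slightly Open", "Mustache", "Narrow Eyes", "No Beard",
    "Oval Face", "Pale Skin", "Pointy Nose", "Receding Hairline", "Rosy Cheeks",
    "Sideburns", "Smiling", "Straight Hair", "Wavy Hair", "Wearing Earrings",
    "Wearing Hat", "Wearing Lipstick", "Wearing Necklace", "Wearing Necktie",
    "Young",
]

def attribute_name(attr_id: int) -> str:
    """Return the attribute name for a 1-indexed attribute ID (1-40)."""
    return ATTRIBUTE_NAMES[attr_id - 1]

print(attribute_name(21))  # -> Male
```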
<h2>From PubFig Dataset</h2>
<ul>
<li>Male</li>