From 70f79c37278d7c47bee29cdf091bde448aae9240 Mon Sep 17 00:00:00 2001
From: adamhrv

(PAGE UNDER DEVELOPMENT)
MegaPixels is art and research by Adam Harvey that unravels the histories, futures, geographies, and meanings of facial recognition datasets. Throughout 2019 this site will publish research reports, visualizations, raw data, and interactive tools to explore how publicly available facial recognition datasets contribute to a global supply chain of biometric data that powers the facial recognition industry.
Collectively, facial recognition datasets are now gathered "in the wild". During the last year, hundreds of these facial analysis datasets have been collected to understand how they contribute to that supply chain of biometric data.
The MegaPixels website is produced in partnership with Mozilla.
(PAGE UNDER DEVELOPMENT)
Brainwash is a face detection dataset created from the Brainwash Cafe's livecam footage, comprising 11,918 images of the "everyday life of a busy downtown cafe" [1]. The images are used to develop face detection algorithms for the "challenging task of detecting people in crowded scenes" and tracking them.
Before closing in 2017, Brainwash Cafe was a "cafe and laundromat" located in San Francisco's SoMa district. The cafe published a publicly available livestream with a view of the cash register, performance stage, and seating area.
Since its publication by Stanford in 2015, the Brainwash dataset has appeared in several notable research papers. In September 2016, four researchers from the National University of Defense Technology in Changsha, China used the Brainwash dataset for a research study on "people head detection in crowded scenes", concluding that their algorithm "achieves superior head detection performance on the crowded scenes dataset" [2]. In 2017, three researchers at the National University of Defense Technology again used Brainwash for a study on object detection, noting that "the data set used in our experiment is shown in Table 1, which includes one scene of the brainwash dataset" [3].
This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns to see yearly totals. Colors are only assigned to the top 10 overall countries.
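As a rough illustration of how the per-country yearly totals behind a chart like this could be tallied, here is a minimal sketch in Python. The CSV filename and the `country`/`year` column names are hypothetical; the actual MegaPixels pipeline is not documented here.

```python
import pandas as pd

# Hypothetical input: one row per citing paper, with a geocoded
# institution "country" and the paper's publication "year".
citations = pd.read_csv("brainwash_citations.csv")

# Count citations per country per year (the yearly column totals).
counts = (
    citations.groupby(["country", "year"])
    .size()
    .reset_index(name="citations")
)

# Rank countries by overall citation count; only the top 10 get a color.
top10 = (
    counts.groupby("country")["citations"].sum()
    .sort_values(ascending=False)
    .head(10)
    .index.tolist()
)
print(counts.head())
print("Top 10 countries:", top10)
```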
To understand how the Brainwash dataset has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the location of each organization that used or referenced the dataset.
The data is generated by collecting all citations for all original research papers associated with the dataset. The PDFs are then converted to text, and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset cannot yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced.
Add more analysis here.
The citations used for the geographic visualizations were collected from Semantic Scholar, a website which aggregates and indexes research papers. Metadata was extracted from these papers, including automatically extracting the names of institutions from the PDFs, and the addresses were then geocoded. The data is not yet manually verified and reflects every instance in which the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology.
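A minimal sketch of the extraction-and-geocoding step described above, assuming the PDFs have already been converted to plain text. The institution list, the simple substring matching, and the use of geopy's Nominatim geocoder are illustrative assumptions, not the site's actual pipeline.

```python
import re
import time
from geopy.geocoders import Nominatim

# Hypothetical list of institution names to look for in paper text.
KNOWN_INSTITUTIONS = [
    "Stanford University",
    "National University of Defense Technology",
]

def extract_institutions(paper_text: str) -> list[str]:
    """Return the known institution names that appear in a paper's text."""
    return [name for name in KNOWN_INSTITUTIONS
            if re.search(re.escape(name), paper_text, re.IGNORECASE)]

def geocode_institutions(names: list[str]) -> dict[str, tuple[float, float]]:
    """Geocode each institution name to (latitude, longitude)."""
    geocoder = Nominatim(user_agent="megapixels-citation-map")
    coords = {}
    for name in names:
        location = geocoder.geocode(name)
        if location is not None:
            coords[name] = (location.latitude, location.longitude)
        time.sleep(1)  # be polite to the free geocoding service
    return coords

# Example: text extracted from one citing paper's PDF.
text = "... Dept. of Computer Science, Stanford University, CA ..."
print(geocode_institutions(extract_institutions(text)))
```

Because matching is automated and institution names are ambiguous, results from a sketch like this would still need the manual verification noted above.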
Add [button/link] to download CSV. Add search input field to filter. Expand number of rows to 10. Reduce URL text to show only the domain (e.g. https://arxiv.org/pdf/123456 → arxiv.org).
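For the last item, a minimal sketch of how a citation URL might be reduced to its bare domain for display; this is an illustrative approach, not the site's actual implementation.

```python
from urllib.parse import urlparse

def display_domain(url: str) -> str:
    """Reduce a full citation URL to its domain for table display."""
    netloc = urlparse(url).netloc
    # Strip a leading "www." so "www.arxiv.org" and "arxiv.org" read the same.
    return netloc[4:] if netloc.startswith("www.") else netloc

print(display_domain("https://arxiv.org/pdf/123456"))  # arxiv.org
```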
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index 605a325a..20138c3c 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -108,6 +108,8 @@ To increase the number of training images, and since COFW has the exact same la
TODO
(PAGE UNDER DEVELOPMENT)
diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html
index 5254fb40..c91d17ad 100644
--- a/site/public/research/01_from_1_to_100_pixels/index.html
+++ b/site/public/research/01_from_1_to_100_pixels/index.html
@@ -78,6 +78,7 @@
The 27 attributes are:
| attribute | representation in file | label |
|---|---|---|
| gender | gender | male(1), female(2) |
| hair length | hair | short hair(1), long hair(2) |
| sleeve length | up | long sleeve(1), short sleeve(2) |
| length of lower-body clothing | down | long lower body clothing(1), short(2) |
| type of lower-body clothing | clothes | dress(1), pants(2) |
| wearing hat | hat | no(1), yes(2) |
| carrying backpack | backpack | no(1), yes(2) |
| carrying bag | bag | no(1), yes(2) |
| carrying handbag | handbag | no(1), yes(2) |
| age | age | young(1), teenager(2), adult(3), old(4) |
| 8 colors of upper-body clothing | upblack, upwhite, upred, uppurple, upyellow, upgray, upblue, upgreen | no(1), yes(2) |
| 9 colors of lower-body clothing | downblack, downwhite, downpink, downpurple, downyellow, downgray, downblue, downgreen, downbrown | no(1), yes(2) |
source: https://github.com/vana77/Market-1501_Attribute/blob/master/README.md
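As a rough illustration of how these encoded labels might be decoded, here is a minimal sketch that assumes the attributes for one identity have already been loaded into a plain Python dict of integers (the annotations are distributed as .mat files whose exact structure is not reproduced here, so the loading step is omitted). The mapping follows the table above; the same approach applies to the DukeMTMC attribute table that follows.

```python
# Integer codes -> readable labels, following the Market-1501 attribute table.
ATTRIBUTE_LABELS = {
    "gender": {1: "male", 2: "female"},
    "hair": {1: "short hair", 2: "long hair"},
    "up": {1: "long sleeve", 2: "short sleeve"},
    "down": {1: "long lower body clothing", 2: "short"},
    "clothes": {1: "dress", 2: "pants"},
    "hat": {1: "no", 2: "yes"},
    "backpack": {1: "no", 2: "yes"},
    "bag": {1: "no", 2: "yes"},
    "handbag": {1: "no", 2: "yes"},
    "age": {1: "young", 2: "teenager", 3: "adult", 4: "old"},
}

def decode_attributes(raw: dict[str, int]) -> dict[str, str]:
    """Translate encoded attribute values into human-readable labels."""
    return {name: ATTRIBUTE_LABELS[name][value]
            for name, value in raw.items() if name in ATTRIBUTE_LABELS}

# Example annotation for one identity (values are illustrative).
example = {"gender": 2, "hair": 2, "up": 1, "hat": 1, "age": 3}
print(decode_attributes(example))
# {'gender': 'female', 'hair': 'long hair', 'up': 'long sleeve', 'hat': 'no', 'age': 'adult'}
```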
The 23 attributes are:
| attribute | representation in file | label |
|---|---|---|
| gender | gender | male(1), female(2) |
| length of upper-body clothing | top | short upper body clothing(1), long(2) |
| wearing boots | boots | no(1), yes(2) |
| wearing hat | hat | no(1), yes(2) |
| carrying backpack | backpack | no(1), yes(2) |
| carrying bag | bag | no(1), yes(2) |
| carrying handbag | handbag | no(1), yes(2) |
| color of shoes | shoes | dark(1), light(2) |
| 8 colors of upper-body clothing | upblack, upwhite, upred, uppurple, upgray, upblue, upgreen, upbrown | no(1), yes(2) |
| 7 colors of lower-body clothing | downblack, downwhite, downred, downgray, downblue, downgreen, downbrown | no(1), yes(2) |
source: https://github.com/vana77/DukeMTMC-attribute/blob/master/README.md
- The joints and other keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees, and ankles)
- The 3D pose inferred from the keypoints
- Visibility boolean for each keypoint
- Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder)
- Body type (male, female, or child)
source: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/
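To make the annotation structure listed above concrete, here is a minimal sketch of how one H3D-style annotation could be represented in code. The class and field names are illustrative assumptions, not the dataset's actual file format.

```python
from dataclasses import dataclass, field

@dataclass
class Keypoint:
    """A single 2D keypoint (e.g. 'left_elbow') with its visibility flag."""
    name: str
    x: float
    y: float
    visible: bool  # the per-keypoint visibility boolean described above

@dataclass
class PersonAnnotation:
    """One annotated person: keypoints, region labels, and body type."""
    body_type: str                                     # "male", "female", or "child"
    keypoints: list[Keypoint] = field(default_factory=list)
    regions: list[str] = field(default_factory=list)   # e.g. "upper clothes", "hat"
    # The 3D pose inferred from the keypoints is omitted here for brevity.

# Example (coordinates are illustrative, not taken from the dataset).
person = PersonAnnotation(
    body_type="female",
    keypoints=[Keypoint("nose", 120.5, 88.0, True),
               Keypoint("left_ankle", 132.0, 310.0, False)],
    regions=["face", "hair", "upper clothes"],
)
print(person.body_type, len(person.keypoints))
```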