From 2e73ee5f56d02ee1dc3dbf384d71081c714a0491 Mon Sep 17 00:00:00 2001 From: adamhrv Date: Sun, 31 Mar 2019 17:45:33 +0200 Subject: add tmp msceleb --- .../pages/datasets/msceleb/assets/background.jpg | Bin 0 -> 422970 bytes .../pages/datasets/msceleb/assets/index.jpg | Bin 0 -> 39839 bytes site/content/pages/datasets/msceleb/index.md | 56 +++++++++++++++++++++ 3 files changed, 56 insertions(+) create mode 100644 site/content/pages/datasets/msceleb/assets/background.jpg create mode 100644 site/content/pages/datasets/msceleb/assets/index.jpg create mode 100644 site/content/pages/datasets/msceleb/index.md (limited to 'site/content/pages/datasets') diff --git a/site/content/pages/datasets/msceleb/assets/background.jpg b/site/content/pages/datasets/msceleb/assets/background.jpg new file mode 100644 index 00000000..c1cd486e Binary files /dev/null and b/site/content/pages/datasets/msceleb/assets/background.jpg differ diff --git a/site/content/pages/datasets/msceleb/assets/index.jpg b/site/content/pages/datasets/msceleb/assets/index.jpg new file mode 100644 index 00000000..fb3a934a Binary files /dev/null and b/site/content/pages/datasets/msceleb/assets/index.jpg differ diff --git a/site/content/pages/datasets/msceleb/index.md b/site/content/pages/datasets/msceleb/index.md new file mode 100644 index 00000000..eb084eaa --- /dev/null +++ b/site/content/pages/datasets/msceleb/index.md @@ -0,0 +1,56 @@ +------------ + +status: published +title: MS Celeb +desc: MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms +subdesc: The MS Celeb dataset includes over 10,000,000 images and 93,000 identities of semi-public figures collected using the Bing search engine +slug: msceleb +cssclass: dataset +image: assets/background.jpg +year: 2015 +published: 2019-2-23 +updated: 2019-2-23 +authors: Adam Harvey + +------------ + +### sidebar + ++ Published: TBD ++ Images: TBD ++ Faces: TBD ++ Created by: TBD + + +## Microsoft Celeb Dataset (MS Celeb) + +(PAGE UNDER DEVELOPMENT) + +At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. + +Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat + +{% include 'chart.html' %} + +{% include 'piechart.html' %} + +{% include 'map.html' %} + +Add more analysis here + + +{% include 'supplementary_header.html' %} + +{% include 'citations.html' %} + + +### Additional Information + +- The dataset author spoke about his research at the CVPR conference in 2016 + + +### Footnotes + +[^readme]: "readme.txt" https://exhibits.stanford.edu/data/catalog/sx925dc9385. +[^localized_region_context]: Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598. +[^replacement_algorithm]: Zhao. X, Wang Y, Dou, Y. 
A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering. \ No newline at end of file -- cgit v1.2.3-70-g09d2 From 1369a597f2f564bf305e0918f91e2a96a4fded82 Mon Sep 17 00:00:00 2001 From: adamhrv Date: Mon, 1 Apr 2019 12:06:14 +0200 Subject: brainwash, example sidebar --- .../datasets/brainwash/assets/00818000_640x480.jpg | Bin 33112 -> 0 bytes .../datasets/brainwash/assets/background_540.jpg | Bin 83594 -> 0 bytes .../datasets/brainwash/assets/background_600.jpg | Bin 86425 -> 0 bytes .../brainwash/assets/brainwash_mean_overlay.jpg | Bin 0 -> 150399 bytes .../brainwash/assets/brainwash_mean_overlay_wm.jpg | Bin 0 -> 151713 bytes site/content/pages/datasets/brainwash/index.md | 41 ++++++++++++++++----- 6 files changed, 31 insertions(+), 10 deletions(-) delete mode 100644 site/content/pages/datasets/brainwash/assets/00818000_640x480.jpg delete mode 100644 site/content/pages/datasets/brainwash/assets/background_540.jpg delete mode 100755 site/content/pages/datasets/brainwash/assets/background_600.jpg create mode 100755 site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay.jpg create mode 100755 site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay_wm.jpg (limited to 'site/content/pages/datasets') diff --git a/site/content/pages/datasets/brainwash/assets/00818000_640x480.jpg b/site/content/pages/datasets/brainwash/assets/00818000_640x480.jpg deleted file mode 100644 index 30c0fcb1..00000000 Binary files a/site/content/pages/datasets/brainwash/assets/00818000_640x480.jpg and /dev/null differ diff --git a/site/content/pages/datasets/brainwash/assets/background_540.jpg b/site/content/pages/datasets/brainwash/assets/background_540.jpg deleted file mode 100644 index 5c8c0ad4..00000000 Binary files a/site/content/pages/datasets/brainwash/assets/background_540.jpg and /dev/null differ diff --git a/site/content/pages/datasets/brainwash/assets/background_600.jpg b/site/content/pages/datasets/brainwash/assets/background_600.jpg deleted file mode 100755 index 8f2de697..00000000 Binary files a/site/content/pages/datasets/brainwash/assets/background_600.jpg and /dev/null differ diff --git a/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay.jpg b/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay.jpg new file mode 100755 index 00000000..2f5917e3 Binary files /dev/null and b/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay.jpg differ diff --git a/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay_wm.jpg b/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay_wm.jpg new file mode 100755 index 00000000..790dbb79 Binary files /dev/null and b/site/content/pages/datasets/brainwash/assets/brainwash_mean_overlay_wm.jpg differ diff --git a/site/content/pages/datasets/brainwash/index.md b/site/content/pages/datasets/brainwash/index.md index 0bf67455..d9bffb39 100644 --- a/site/content/pages/datasets/brainwash/index.md +++ b/site/content/pages/datasets/brainwash/index.md @@ -19,28 +19,24 @@ authors: Adam Harvey + Published: 2015 + Images: 11,918 + Faces: 91,146 -+ Created by: Stanford Department of Computer Science ++ Created by: Stanford University (US)
Max Planck Institute for Informatics (DE) + Funded by: Max Planck Center for Visual Computing and Communication -+ Location: Brainwash Cafe, San Franscisco -+ Purpose: Training face detection ++ Purpose: Face detection + Website: stanford.edu -+ Paper: End-to-End People Detection in Crowded Scenes -+ Explicit Consent: No ## Brainwash Dataset (PAGE UNDER DEVELOPMENT) -*Brainwash* is a face detection dataset created from the Brainwash Cafe's livecam footage including 11,918 images of "everyday life of a busy downtown cafe[^readme]". The images are used to develop face detection algorithms for the "challenging task of detecting people in crowded scenes" and tracking them. +*Brainwash* is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe[^readme]". The images are used to train and validate algorithms for detecting people in crowded scenes. -Before closing in 2017, Brainwash Cafe was a "cafe and laundromat" located in San Francisco's SoMA district. The cafe published a publicy available livestream from the cafe with a view of the cash register, performance stage, and seating area. +Before closing in 2017, The Brainwash Cafe was a combination cafe, laundromat, and performance venue located in San Francisco's SoMA district. The images used for Brainwash dataset were captured on 3 days: October 27, November 13, and November 24 in 2014. According the author's reserach paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper]. -Since it's publication by Stanford in 2015, the Brainwash dataset has appeared in several notable research papers. In September 2016 four researchers from the National University of Defense Technology in Changsha, China used the Brainwash dataset for a research study on "people head detection in crowded scenes", concluding that their algorithm "achieves superior head detection performance on the crowded scenes dataset[^localized_region_context]". And again in 2017 three researchers at the National University of Defense Technology used Brainwash for a study on object detection noting "the data set used in our experiment is shown in Table 1, which includes one scene of the brainwash dataset[^replacement_algorithm]". +Brainwash is not a widely used dataset but since it's publication by Stanford in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and in 2017 researchers there conducted studies on "people head detection in crowded scenes" [^localized_region_context] [^replacement_algorithm]. -![caption: An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)](assets/00425000_960.jpg) +![caption: The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey](assets/brainwash_mean_overlay.jpg) -![caption: 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_montage.jpg) {% include 'chart.html' %} @@ -55,12 +51,37 @@ Add more analysis here {% include 'citations.html' %} +![caption: An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. 
The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)](assets/00425000_960.jpg) + +![caption: 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_montage.jpg) ### Additional Information - The dataset author spoke about his research at the CVPR conference in 2016 +To evaluate the performance of our approach, we collected a +large dataset of images from busy scenes using video footage available from public webcams. In +total, we collect 11917 images with 91146 labeled people. We extract images from video footage at +a fixed interval of 100 seconds to ensure a large variation in images. We allocate 1000 images for +testing and validation, and leave the remaining images for training, making sure that no temporal +overlaps exist between training and test splits. The resulting training set contains 82906 instances. +Test and validation sets contain 4922 and 3318 people instances respectively. Images were labeled +using Amazon Mechanical Turk by a handful of workers pre-selected through their performance on +an example task. We label each person’s head to avoid ambiguity in bounding box locations. The +annotator labels any person she is able to recognize, even if a substantial part of the person is not +visible. Images and annotations will be made available 1 . +Examples of collected images are shown in Fig. 6, and in the video included in the supplemental +material. Images in our dataset include challenges such as people at small scales, strong partial +occlusions, and a large variability in clothing and appearance. + +TODO + +- add bounding boxes to the header image +- remake montage with randomized images, with bboxes +- clean up intro text + + ### Footnotes [^readme]: "readme.txt" https://exhibits.stanford.edu/data/catalog/sx925dc9385. -- cgit v1.2.3-70-g09d2 From 1d261333895cb9305c73d02170e61c5100a39358 Mon Sep 17 00:00:00 2001 From: adamhrv Date: Mon, 1 Apr 2019 12:49:57 +0200 Subject: add dataset size --- site/content/pages/datasets/brainwash/index.md | 38 +++++++------------------- 1 file changed, 10 insertions(+), 28 deletions(-) (limited to 'site/content/pages/datasets') diff --git a/site/content/pages/datasets/brainwash/index.md b/site/content/pages/datasets/brainwash/index.md index d9bffb39..6d90e78f 100644 --- a/site/content/pages/datasets/brainwash/index.md +++ b/site/content/pages/datasets/brainwash/index.md @@ -2,8 +2,8 @@ status: published title: Brainwash -desc: Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco -subdesc: The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection algorithms +desc: Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014 +subdesc: The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms slug: brainwash cssclass: dataset image: assets/background.jpg @@ -21,19 +21,18 @@ authors: Adam Harvey + Faces: 91,146 + Created by: Stanford University (US)
Max Planck Institute for Informatics (DE) + Funded by: Max Planck Center for Visual Computing and Communication -+ Purpose: Face detection ++ Purpose: Head detection ++ Download Size: 4.1GB + Website: stanford.edu ## Brainwash Dataset -(PAGE UNDER DEVELOPMENT) +*Brainwash* is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"[^readme] captured at 100 second intervals throught the entire day. Brainwash dataset was captured during 3 days in 2014: October 27, November 13, and November 24. According the author's reserach paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper]. -*Brainwash* is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe[^readme]". The images are used to train and validate algorithms for detecting people in crowded scenes. +Brainwash is not a widely used dataset but since its publication by Stanford University in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and in 2017 researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance [^localized_region_context] [^replacement_algorithm]. -Before closing in 2017, The Brainwash Cafe was a combination cafe, laundromat, and performance venue located in San Francisco's SoMA district. The images used for Brainwash dataset were captured on 3 days: October 27, November 13, and November 24 in 2014. According the author's reserach paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper]. - -Brainwash is not a widely used dataset but since it's publication by Stanford in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and in 2017 researchers there conducted studies on "people head detection in crowded scenes" [^localized_region_context] [^replacement_algorithm]. +If you happen to have been at Brainwash cafe in San Franscisco at any time on October 26, November 13, or November 24 in 2014 you are most likely included in the Brainwash dataset. ![caption: The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey](assets/brainwash_mean_overlay.jpg) @@ -44,42 +43,25 @@ Brainwash is not a widely used dataset but since it's publication by Stanford in {% include 'map.html' %} -Add more analysis here - +{% include 'citations.html' %} {% include 'supplementary_header.html' %} -{% include 'citations.html' %} - ![caption: An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)](assets/00425000_960.jpg) ![caption: 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)](assets/brainwash_montage.jpg) -### Additional Information +#### Additional Resources - The dataset author spoke about his research at the CVPR conference in 2016 -To evaluate the performance of our approach, we collected a -large dataset of images from busy scenes using video footage available from public webcams. 
In -total, we collect 11917 images with 91146 labeled people. We extract images from video footage at -a fixed interval of 100 seconds to ensure a large variation in images. We allocate 1000 images for -testing and validation, and leave the remaining images for training, making sure that no temporal -overlaps exist between training and test splits. The resulting training set contains 82906 instances. -Test and validation sets contain 4922 and 3318 people instances respectively. Images were labeled -using Amazon Mechanical Turk by a handful of workers pre-selected through their performance on -an example task. We label each person’s head to avoid ambiguity in bounding box locations. The -annotator labels any person she is able to recognize, even if a substantial part of the person is not -visible. Images and annotations will be made available 1 . -Examples of collected images are shown in Fig. 6, and in the video included in the supplemental -material. Images in our dataset include challenges such as people at small scales, strong partial -occlusions, and a large variability in clothing and appearance. - TODO - add bounding boxes to the header image - remake montage with randomized images, with bboxes - clean up intro text +- verify quote citations ### Footnotes -- cgit v1.2.3-70-g09d2 From 7aaa8b8cd68d3eb09c68da2b0a64cbe635fdb8d5 Mon Sep 17 00:00:00 2001 From: adamhrv Date: Mon, 1 Apr 2019 12:51:52 +0200 Subject: updating datasets --- .../assets/duke_mtmc_cam5_average_comp.jpg | Bin 0 -> 195172 bytes site/content/pages/datasets/duke_mtmc/index.md | 28 ++++++++++++------ .../datasets/uccs/assets/uccs_bboxes_clr_fill.jpg | Bin 146050 -> 0 bytes .../datasets/uccs/assets/uccs_bboxes_grayscale.jpg | Bin 299802 -> 0 bytes .../datasets/uccs/assets/uccs_mean_bboxes_comp.jpg | Bin 0 -> 253215 bytes site/content/pages/datasets/uccs/index.md | 32 +++++++++++++++------ 6 files changed, 43 insertions(+), 17 deletions(-) create mode 100755 site/content/pages/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg delete mode 100644 site/content/pages/datasets/uccs/assets/uccs_bboxes_clr_fill.jpg delete mode 100644 site/content/pages/datasets/uccs/assets/uccs_bboxes_grayscale.jpg create mode 100644 site/content/pages/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg (limited to 'site/content/pages/datasets') diff --git a/site/content/pages/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg b/site/content/pages/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg new file mode 100755 index 00000000..3cd64df1 Binary files /dev/null and b/site/content/pages/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg differ diff --git a/site/content/pages/datasets/duke_mtmc/index.md b/site/content/pages/datasets/duke_mtmc/index.md index de1fa14c..c626ef4e 100644 --- a/site/content/pages/datasets/duke_mtmc/index.md +++ b/site/content/pages/datasets/duke_mtmc/index.md @@ -2,8 +2,8 @@ status: published title: Duke Multi-Target, Multi-Camera Tracking -desc: Duke MTMC is a dataset of CCTV footage of students at Duke University -subdesc: Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 cameras at Duke University campus in March 2014 +desc: Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus +subdesc: Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 HD cameras at Duke University campus in March 2014 slug: duke_mtmc cssclass: dataset image: assets/background.jpg @@ -15,17 +15,27 @@ 
authors: Adam Harvey ### sidebar -+ Collected: March 19, 2014 -+ Cameras: 8 -+ Video Frames: 2,000,000 -+ Identities: Over 2,000 -+ Used for: Person re-identification,
face recognition -+ Sector: Academic ++ Created: 2014 ++ Identities: Over 2,700 ++ Used for: Face recognition, person re-identification ++ Created by: Computer Science Department, Duke University, Durham, US + Website: duke.edu ## Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC) -(PAGE UNDER DEVELOPMENT) +[ PAGE UNDER DEVELOPMENT ] + +Duke MTMC is a dataset of video recorded on Duke University campus during for the purpose of training, evaluating, and improving *multi-target multi-camera tracking*. The videos were recorded during February and March 2014 and cinclude + +Includes a total of 888.8 minutes of video (ind. verified) + +"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy." + +The dataset includes approximately 2,000 annotated identities appearing in 85 hours of video from 8 cameras located throughout Duke University's campus. + +![caption: Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey](assets/duke_mtmc_cam5_average_comp.jpg) + +According to the dataset authors, {% include 'map.html' %} diff --git a/site/content/pages/datasets/uccs/assets/uccs_bboxes_clr_fill.jpg b/site/content/pages/datasets/uccs/assets/uccs_bboxes_clr_fill.jpg deleted file mode 100644 index c8002bb9..00000000 Binary files a/site/content/pages/datasets/uccs/assets/uccs_bboxes_clr_fill.jpg and /dev/null differ diff --git a/site/content/pages/datasets/uccs/assets/uccs_bboxes_grayscale.jpg b/site/content/pages/datasets/uccs/assets/uccs_bboxes_grayscale.jpg deleted file mode 100644 index 6e2833dd..00000000 Binary files a/site/content/pages/datasets/uccs/assets/uccs_bboxes_grayscale.jpg and /dev/null differ diff --git a/site/content/pages/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg b/site/content/pages/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg new file mode 100644 index 00000000..18f4c5ec Binary files /dev/null and b/site/content/pages/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg differ diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md index 092638c0..b3d16c2e 100644 --- a/site/content/pages/datasets/uccs/index.md +++ b/site/content/pages/datasets/uccs/index.md @@ -2,8 +2,8 @@ status: published title: Unconstrained College Students -desc: Unconstrained College Students (UCCS) is a dataset of images ... -subdesc: The UCCS dataset includes ... +desc: Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge +subdesc: The UCCS dataset includes 16,149 images and 1,732 identities, is used for face recognition and face detection, and funded was several US defense agences slug: uccs cssclass: dataset image: assets/background.jpg @@ -15,16 +15,22 @@ authors: Adam Harvey ### sidebar -+ Collected: TBD -+ Published: TBD -+ Images: TBD -+ Faces: TBD ++ Published: 2018 ++ Images: 16,149 ++ Identities: 1,732 ++ Used for: Face recognition, face detection ++ Created by: Unviversity of Colorado Colorado Springs (US) ++ Funded by: ODNI, IARPA, ONR MURI, Amry SBIR, SOCOM SBIR ++ Website: vast.uccs.edu ## Unconstrained College Students ... 
(PAGE UNDER DEVELOPMENT) +![caption: The pixel-average of all Uconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey](assets/uccs_mean_bboxes_comp.jpg) + + {% include 'map.html' %} {% include 'chart.html' %} @@ -36,7 +42,6 @@ authors: Adam Harvey {% include 'citations.html' %} -![Bounding box visualization](assets/uccs_bboxes_grayscale.jpg) ### Research Notes @@ -55,4 +60,15 @@ The more recent UCCS version of the dataset received funding from [^funding_uccs [^funding_sb]: Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013. -[^funding_uccs]: Günther, M. et. al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. Arxiv 1708.02337v3. \ No newline at end of file +[^funding_uccs]: Günther, M. et. al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. Arxiv 1708.02337v3. + + +" In most face detection/recognition datasets, the majority of images are “posed”, i.e. the subjects know they are being photographed, and/or the images are selected for publication in public media. Hence, blurry, occluded and badly illuminated images are generally uncommon in these datasets. In addition, most of these challenges are close-set, i.e. the list of subjects in the gallery is the same as the one used for testing. + +This challenge explores more unconstrained data, by introducing the new UnConstrained College Students (UCCS) dataset, where subjects are photographed using a long-range high-resolution surveillance camera without their knowledge. Faces inside these images are of various poses, and varied levels of blurriness and occlusion. The challenge also creates an open set recognition problem, where unknown people will be seen during testing and must be rejected. + +With this challenge, we hope to foster face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet. + +UnConstrained College Students (UCCS) Dataset + +The UCCS dataset was collected over several months using Canon 7D camera fitted with Sigma 800mm F5.6 EX APO DG HSM lens, taking images at one frame per second, during times when many students were walking on the sidewalk. 
" \ No newline at end of file -- cgit v1.2.3-70-g09d2 From 4a11e59f991c8ca12ef4ca20a3b01741f311a0e4 Mon Sep 17 00:00:00 2001 From: adamhrv Date: Mon, 1 Apr 2019 13:10:52 +0200 Subject: updates, broke smth --- site/assets/css/css.css | 4 +- site/content/pages/datasets/index.md | 2 +- site/content/pages/datasets/uccs/index.md | 3 +- .../research/01_from_1_to_100_pixels/index.md | 52 ++++++++++++++++++++++ .../research/02_what_computers_can_see/index.md | 25 ++++++++++- site/includes/map.html | 2 +- 6 files changed, 81 insertions(+), 7 deletions(-) (limited to 'site/content/pages/datasets') diff --git a/site/assets/css/css.css b/site/assets/css/css.css index cd16409a..0ee8a4f3 100644 --- a/site/assets/css/css.css +++ b/site/assets/css/css.css @@ -884,7 +884,7 @@ ul.map-legend li.source:before { font-family: Roboto, sans-serif; font-weight: 400; background: #202020; - padding: 15px; + padding: 20px; margin: 10px; } .columns .column:first-of-type { @@ -937,7 +937,7 @@ ul.map-legend li.source:before { margin:0 0 0 40px; } .content-about .team-member p{ - font-size:14px; + font-size:16px; } .content-about .team-member img{ margin:0; diff --git a/site/content/pages/datasets/index.md b/site/content/pages/datasets/index.md index 2e943fbe..c0373d60 100644 --- a/site/content/pages/datasets/index.md +++ b/site/content/pages/datasets/index.md @@ -13,4 +13,4 @@ sync: false # Facial Recognition Datasets -### Survey +Explore publicly available facial recognition datasets. More datasets will be added throughout 2019. diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md index b3d16c2e..e0925e07 100644 --- a/site/content/pages/datasets/uccs/index.md +++ b/site/content/pages/datasets/uccs/index.md @@ -3,8 +3,7 @@ status: published title: Unconstrained College Students desc: Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge -subdesc: The UCCS dataset includes 16,149 images and 1,732 identities, is used for face recognition and face detection, and funded was several US defense agences -slug: uccs +subdesc: The UCCS dataset includes 16,149 images and 1,732 identities of students at University of Colorado Colorado Springs campus and is used for face recognition and face detection cssclass: dataset image: assets/background.jpg published: 2019-2-23 diff --git a/site/content/pages/research/01_from_1_to_100_pixels/index.md b/site/content/pages/research/01_from_1_to_100_pixels/index.md index a7b863a9..b219dffb 100644 --- a/site/content/pages/research/01_from_1_to_100_pixels/index.md +++ b/site/content/pages/research/01_from_1_to_100_pixels/index.md @@ -56,3 +56,55 @@ Ideas: - "Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf - IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded." 
+ + + +As the resolution +formatted as rectangular databases of 16-bit RGB tuples or 8-bit grayscale values + + +To consider how visual privacy applies to real-world surveillance situations, the first + +A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet `a-Z0-9` with room to spare. + +A 2x2 pixels contains + +Using no more than a 42 pixel (6x7 image) face image, researchers [cite] were able to correctly distinguish between a group of 50 people. Yet + +The likely outcome of face recognition research is that more data is needed to improve. Indeed, resolution is the determining factor for all biometric systems, both as training data to increase + +Pixels, typically considered the building blocks of images and videos, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors. + + +Wi-Fi and cameras present elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk? + +As the global Internet becomes increasingly efficient at turning the Internet into a giant dataset for machine learning, forensics, and data analysis, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade? For example, a research paper showed that by decreasing a face size to 12x16 it was possible to achieve 98% accuracy with 50 people. This is clearly an example of + +This research module, tentatively called Visual Defense Tools, aims to explore the + + +### Prior Research + +- MPI visual privacy advisor +- NIST: super resolution +- YouTube blur tool +- WITNESS: blur tool +- Pixellated text +- CV Dazzle +- Bellingcat guide to geolocation +- Peng! magic passport + +### Notes + +- In China, out of the approximately 200 million surveillance cameras only about 15% have enough resolution for face recognition. +- In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000. +- In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals. +- In a face recognition trial at Berlin's Südkreuz station, the false-match rate was 20%. + + +What all 3 examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny, the former director of IARPA, admitted that face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact face recognition is a misleading term. Face recognition is a search engine for faces that can only ever show you the most likely match.
This presents real a real threat to privacy and lends + + +Globally, iPhone users unwittingly agree to 1/1,000,000 probably +relying on FaceID and TouchID to protect their information agree to a \ No newline at end of file diff --git a/site/content/pages/research/02_what_computers_can_see/index.md b/site/content/pages/research/02_what_computers_can_see/index.md index ab4c7884..51621f46 100644 --- a/site/content/pages/research/02_what_computers_can_see/index.md +++ b/site/content/pages/research/02_what_computers_can_see/index.md @@ -100,6 +100,7 @@ A list of 100 things computer vision can see, eg: - Wearing Necktie - Wearing Necklace +for i in {1..9};do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for i in {10..20}; do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done ## From Market 1501 @@ -149,4 +150,26 @@ Visibility boolean for each keypoint Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder) Body type (male, female or child) -source: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/ \ No newline at end of file +source: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/ + +## From Leeds Sports Pose + +=INDEX(A2:A9,MATCH(datasets!D1,B2:B9,0)) +=VLOOKUP(A2, datasets!A:J, 7, FALSE) + +Right ankle +Right knee +Right hip +Left hip +Left knee +Left ankle +Right wrist +Right elbow +Right shoulder +Left shoulder +Left elbow +Left wrist +Neck +Head top + +source: http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html \ No newline at end of file diff --git a/site/includes/map.html b/site/includes/map.html index 31d577cd..30c248a6 100644 --- a/site/includes/map.html +++ b/site/includes/map.html @@ -12,7 +12,7 @@ -->

- To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citations {{ metadata.meta.dataset.name_display }} are collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. + To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.

-- cgit v1.2.3-70-g09d2 From 2d8b7dd6ea6ccb0293c8839898cf7a1246dc0eb4 Mon Sep 17 00:00:00 2001 From: Jules Laplace Date: Mon, 1 Apr 2019 14:25:06 +0200 Subject: rebuild --- megapixels/app/site/builder.py | 2 +- megapixels/app/site/loader.py | 3 ++ site/content/pages/datasets/uccs/index.md | 1 + site/includes/map.html | 2 +- site/public/datasets/index.html | 2 +- .../research/01_from_1_to_100_pixels/index.html | 32 ++++++++++++++++++++++ .../research/02_what_computers_can_see/index.html | 19 +++++++++++++ site/public/research/index.html | 18 ++++++++++-- 8 files changed, 74 insertions(+), 5 deletions(-) (limited to 'site/content/pages/datasets') diff --git a/megapixels/app/site/builder.py b/megapixels/app/site/builder.py index 603d4788..55a85b0f 100644 --- a/megapixels/app/site/builder.py +++ b/megapixels/app/site/builder.py @@ -57,7 +57,7 @@ def build_page(fn, research_posts, datasets): s3.sync_directory(dirname, s3_dir, metadata) content = parser.parse_markdown(metadata, sections, s3_path, skip_h1=skip_h1) - + html = template.render( metadata=metadata, content=content, diff --git a/megapixels/app/site/loader.py b/megapixels/app/site/loader.py index a544333b..d150942c 100644 --- a/megapixels/app/site/loader.py +++ b/megapixels/app/site/loader.py @@ -85,6 +85,9 @@ def parse_metadata(fn, sections): metadata['meta'] = load_json(dataset_path) if not metadata['meta']: print("Bad metadata? {}".format(dataset_path)) + else: + print(metadata['slug']) + print("{} does not exist!".format(dataset_path)) if 'meta' not in metadata or not metadata['meta']: # dude metadata['meta'] = {} diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md index e0925e07..1e3ec097 100644 --- a/site/content/pages/datasets/uccs/index.md +++ b/site/content/pages/datasets/uccs/index.md @@ -6,6 +6,7 @@ desc: Unconstrained College Students (UCCS) is subdesc: The UCCS dataset includes 16,149 images and 1,732 identities of students at University of Colorado Colorado Springs campus and is used for face recognition and face detection cssclass: dataset image: assets/background.jpg +slug: uccs published: 2019-2-23 updated: 2019-2-23 authors: Adam Harvey diff --git a/site/includes/map.html b/site/includes/map.html index 30c248a6..7511d4c7 100644 --- a/site/includes/map.html +++ b/site/includes/map.html @@ -12,7 +12,7 @@ -->

- To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. + To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full }} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.

diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html index 03b38f8a..1d2630e1 100644 --- a/site/public/datasets/index.html +++ b/site/public/datasets/index.html @@ -28,7 +28,7 @@

Facial Recognition Datasets

-

Survey

+

Explore publicly available facial recognition datasets. More datasets will be added throughout 2019.

diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html index c91d17ad..37fc367f 100644 --- a/site/public/research/01_from_1_to_100_pixels/index.html +++ b/site/public/research/01_from_1_to_100_pixels/index.html @@ -80,6 +80,38 @@
  • "Note that we only keep the images with a minimal side length of 80 pixels." and "a face will be labeled as “Ignore” if it is very difficult to be detected due to blurring, severe deformation and unrecognizable eyes, or the side length of its bounding box is less than 32 pixels." Ge_Detecting_Masked_Faces_CVPR_2017_paper.pdf
  • IBM DiF: "Faces with region size less than 50x50 or inter-ocular distance of less than 30 pixels were discarded. Faces with non-frontal pose, or anything beyond being slightly tilted to the left or the right, were also discarded."
  • +

    As the resolution +formatted as rectangular databases of 16 bit RGB-tuples or 8 bit grayscale values

    +

    To consider how visual privacy applies to real world surveillance situations, the first

    +

    A single 8-bit grayscale pixel with 256 values is enough to represent the entire alphabet a-Z0-9 with room to spare.

    +

    A 2x2 pixels contains

    +

    Using no more than a 42 pixel (6x7 image) face image researchers [cite] were able to correctly distinguish between a group of 50 people. Yet

    +

    The likely outcome of face recognition research is that more data is needed to improve. Indeed, resolution is the determining factor for all biometric systems, both as training data to increase

    +

    Pixels, typically considered the buiding blocks of images and vidoes, can also be plotted as a graph of sensor values corresponding to the intensity of RGB-calibrated sensors.

    +

    Wi-Fi and cameras presents elevated risks for transmitting videos and image documentation from conflict zones, high-risk situations, or even sharing on social media. How can new developments in computer vision also be used in reverse, as a counter-forensic tool, to minimize an individual's privacy risk?

    +

    As the global Internet becomes increasingly effecient at turning the Internet into a giant dataset for machine learning, forensics, and data analysing, it would be prudent to also consider tools for decreasing the resolution. The Visual Defense module is just that. What are new ways to minimize the adverse effects of surveillance by dulling the blade. For example, a researcher paper showed that by decreasing a face size to 12x16 it was possible to do 98% accuracy with 50 people. This is clearly an example of

    +

    This research module, tentatively called Visual Defense Tools, aims to explore the

    +

    Prior Research

    +
      +
    • MPI visual privacy advisor
    • +
    • NIST: super resolution
    • +
    • YouTube blur tool
    • +
    • WITNESS: blur tool
    • +
    • Pixellated text
    • +
    • CV Dazzle
    • +
    • Bellingcat guide to geolocation
    • +
    • Peng! magic passport
    • +
    +

    Notes

    +
      +
    • In China, out of the approximately 200 million surveillance cameras only about 15% have enough resolution for face recognition.
    • +
    • In Apple's FaceID security guide, the probability of someone else's face unlocking your phone is 1 out of 1,000,000.
    • +
    • In England, the Metropolitan Police reported a false-positive match rate of 98% when attempting to use face recognition to locate wanted criminals.
    • +
    • In a face recognition trial at Berlin's Sudkreuz station, the false-match rate was 20%.
    • +
    +

    What all 3 examples illustrate is that face recognition is anything but absolute. In a 2017 talk, Jason Matheny the former directory of IARPA, admitted the face recognition is so brittle it can be subverted by using a magic marker and drawing "a few dots on your forehead". In fact face recognition is a misleading term. Face recognition is search engine for faces that can only ever show you the mos likely match. This presents real a real threat to privacy and lends

    +

    Globally, iPhone users unwittingly agree to 1/1,000,000 probably +relying on FaceID and TouchID to protect their information agree to a


    1. NIST 906932. Performance Assessment of Face Recognition Using Super-Resolution. Shuowen Hu, Robert Maschal, S. Susan Young, Tsai Hong Hong, Jonathon P. Phillips

    2. diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html index 9389bf84..0fce1373 100644 --- a/site/public/research/02_what_computers_can_see/index.html +++ b/site/public/research/02_what_computers_can_see/index.html @@ -126,6 +126,7 @@
    3. Wearing Necktie
    4. Wearing Necklace
    5. +

      for i in {1..9};do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_0$i.MP4;done;for i in {10..20}; do wget http://visiond1.cs.umbc.edu/webpage/codedata/ADLdataset/ADL_videos/P_$i.MP4;done

      From Market 1501

      The 27 attributes are:

      @@ -269,6 +270,24 @@ Visibility boolean for each keypoint Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder) Body type (male, female or child)

      source: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/h3d/

      +

      From Leeds Sports Pose

      +

      =INDEX(A2:A9,MATCH(datasets!D1,B2:B9,0)) +=VLOOKUP(A2, datasets!A:J, 7, FALSE)

      +

      Right ankle +Right knee +Right hip +Left hip +Left knee +Left ankle +Right wrist +Right elbow +Right shoulder +Left shoulder +Left elbow +Left wrist +Neck +Head top

      +

      source: http://web.archive.org/web/20170915023005/sam.johnson.io/research/lsp.html

      diff --git a/site/public/research/index.html b/site/public/research/index.html index 303732f8..0ef57043 100644 --- a/site/public/research/index.html +++ b/site/public/research/index.html @@ -26,8 +26,22 @@
      -

      Research Blog

      -
      +
      +

      Research

      +
      +
      +
      Posted
      +
      2018-12-15
      +
      +
      +
      By
      +
      Adam Harvey
      +
      + +
      +
      + +
      -- cgit v1.2.3-70-g09d2 From 79ca5c75243e4d94a7924d1bda8666123f398d9c Mon Sep 17 00:00:00 2001 From: adamhrv Date: Mon, 1 Apr 2019 19:50:44 +0200 Subject: . --- site/content/pages/datasets/uccs/index.md | 70 ++++++++++++++++++++++++------- site/includes/map.html | 2 +- 2 files changed, 55 insertions(+), 17 deletions(-) (limited to 'site/content/pages/datasets') diff --git a/site/content/pages/datasets/uccs/index.md b/site/content/pages/datasets/uccs/index.md index e0925e07..80ce0836 100644 --- a/site/content/pages/datasets/uccs/index.md +++ b/site/content/pages/datasets/uccs/index.md @@ -2,6 +2,7 @@ status: published title: Unconstrained College Students +slug: uccs desc: Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge subdesc: The UCCS dataset includes 16,149 images and 1,732 identities of students at University of Colorado Colorado Springs campus and is used for face recognition and face detection cssclass: dataset @@ -27,6 +28,50 @@ authors: Adam Harvey (PAGE UNDER DEVELOPMENT) +Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs. According to the authors of two papers associated with the dataset, subjects were "photographed using a long-range high-resolution surveillance camera without their knowledge" [^funding_sb]. The images were captured using a Canon 7D digital camera fitted with a Sigma 800mm telephoto lens pointed out the window of an office. + +The UCCS dataset was funded by ODNI (Office of Director of National Intelligence), IARPA (Intelligence Advance Research Projects Activity), ONR MURI Office of Naval Research and The Department of Defense Multidisciplinary University Research Initiative, Army SBIR (Small Business Innovation Research), SOCOM SBIR (Special Operations Command and Small Business Innovation Research), and the National Science Foundation. + +The images in UCCS include students walking between classes on campus over 19 days in 2012 - 2013. The dates include: + +| Year | Month | Day | Date | Time Range | Photos | +| --- | --- | --- | --- | --- | --- | +| 2012 | Februay | --- | 23 | - | 132 | +| 2012 | March | --- | 6 | - | - | +| 2012 | March | --- | 8 | - | - | +| 2012 | March | --- | 13 | - | - | +| 2012 | Februay | --- | 23 | - | 132 | +| 2012 | March | --- | 6 | - | - | +| 2012 | March | --- | 8 | - | - | +| 2012 | March | --- | 13 | - | - | +| 2012 | Februay | --- | 23 | - | 132 | +| 2012 | March | --- | 6 | - | - | +| 2012 | March | --- | 8 | - | - | +| 2012 | March | --- | 13 | - | - | +| 2012 | Februay | --- | 23 | - | 132 | +| 2012 | March | --- | 6 | - | - | +| 2012 | March | --- | 8 | - | - | +| 2012 | March | --- | 13 | - | - | +| 2012 | Februay | --- | 23 | - | 132 | +| 2012 | March | --- | 6 | - | - | +| 2012 | March | --- | 8 | - | - | + + +2012-03-20 +2012-03-22 +2012-04-03 +2012-04-12 +2012-04-17 +2012-04-24 +2012-04-25 +2012-04-26 +2013-01-28 +2013-01-29 +2013-02-13 +2013-02-19 +2013-02-20 +2013-02-26 + ![caption: The pixel-average of all Uconstrained College Students images is shown with all 51,838 face annotations. 
(c) Adam Harvey](assets/uccs_mean_bboxes_comp.jpg) @@ -36,13 +81,9 @@ authors: Adam Harvey {% include 'piechart.html' %} -{% include 'supplementary_header.html' %} - {% include 'citations.html' %} - - -### Research Notes +{% include 'supplementary_header.html' %} The original Sapkota and Boult dataset, from which UCCS is derived, received funding from[^funding_sb]: @@ -57,17 +98,14 @@ The more recent UCCS version of the dataset received funding from [^funding_uccs - IARPA (Intelligence Advance Research Projects Activity) R&D contract 2014-14071600012 +### TODO -[^funding_sb]: Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013. -[^funding_uccs]: Günther, M. et. al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. Arxiv 1708.02337v3. - - -" In most face detection/recognition datasets, the majority of images are “posed”, i.e. the subjects know they are being photographed, and/or the images are selected for publication in public media. Hence, blurry, occluded and badly illuminated images are generally uncommon in these datasets. In addition, most of these challenges are close-set, i.e. the list of subjects in the gallery is the same as the one used for testing. +- add tabulator module for dates +- parse dates into CSV using Python +- get google image showing line of sight? +- fix up quote/citations -This challenge explores more unconstrained data, by introducing the new UnConstrained College Students (UCCS) dataset, where subjects are photographed using a long-range high-resolution surveillance camera without their knowledge. Faces inside these images are of various poses, and varied levels of blurriness and occlusion. The challenge also creates an open set recognition problem, where unknown people will be seen during testing and must be rejected. +### footnotes -With this challenge, we hope to foster face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet. - -UnConstrained College Students (UCCS) Dataset - -The UCCS dataset was collected over several months using Canon 7D camera fitted with Sigma 800mm F5.6 EX APO DG HSM lens, taking images at one frame per second, during times when many students were walking on the sidewalk. " \ No newline at end of file +[^funding_sb]: Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013. +[^funding_uccs]: Günther, M. et. al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. Arxiv 1708.02337v3. \ No newline at end of file diff --git a/site/includes/map.html b/site/includes/map.html index 30c248a6..7511d4c7 100644 --- a/site/includes/map.html +++ b/site/includes/map.html @@ -12,7 +12,7 @@ -->

      - To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. + To help understand how {{ metadata.meta.dataset.name_display }} has been used around the world for commercial, military and academic research; publicly available research citing {{ metadata.meta.dataset.name_full }} is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.

      -- cgit v1.2.3-70-g09d2 From b8f5c87e823d0b68d5e30f8de453ba90dcadc241 Mon Sep 17 00:00:00 2001 From: Jules Laplace Date: Tue, 2 Apr 2019 14:38:27 +0200 Subject: sidebar from spreadsheet --- megapixels/app/site/loader.py | 38 ++++++++++++++++++++++ megapixels/app/site/parser.py | 11 +------ site/assets/css/css.css | 11 +------ site/content/pages/datasets/brainwash/index.md | 11 +------ site/includes/sidebar.html | 6 ++++ .../datasets/50_people_one_question/index.html | 4 +-- site/public/datasets/brainwash/index.html | 24 ++++++++++++-- site/public/datasets/celeba/index.html | 4 +-- site/public/datasets/cofw/index.html | 4 +-- site/public/datasets/duke_mtmc/index.html | 4 +-- site/public/datasets/facebook/index.html | 3 +- site/public/datasets/hrt_transgender/index.html | 4 +-- site/public/datasets/lfw/index.html | 4 +-- site/public/datasets/market_1501/index.html | 4 +-- site/public/datasets/msceleb/index.html | 4 +-- site/public/datasets/pipa/index.html | 4 +-- site/public/datasets/uccs/index.html | 4 +-- site/public/datasets/viper/index.html | 4 +-- site/public/research/index.html | 18 ++++++++-- 19 files changed, 109 insertions(+), 57 deletions(-) create mode 100644 site/includes/sidebar.html (limited to 'site/content/pages/datasets') diff --git a/megapixels/app/site/loader.py b/megapixels/app/site/loader.py index 779f68ba..701c78b2 100644 --- a/megapixels/app/site/loader.py +++ b/megapixels/app/site/loader.py @@ -5,6 +5,9 @@ import glob import app.settings.app_cfg as cfg from app.utils.file_utils import load_json +import app.utils.sheet_utils as sheet + +sidebar = sheet.fetch_google_lookup("sidebar", item_key="key") def read_metadata(fn): """ @@ -20,6 +23,12 @@ def read_metadata(fn): sections = data.split("\n\n") return parse_metadata(fn, sections) +def domainFromUrl(url): + domain = url.split('/')[2].split('.') + if len(domain) > 2 and len(domain[-2]) == 2: + return ".".join(domain[-3:]) + return ".".join(domain[-2:]) + default_metadata = { 'status': 'published', @@ -33,6 +42,18 @@ default_metadata = { 'tagline': '', } +sidebar_order = [ + { 'key': 'published', 'title': 'Published' }, + { 'key': 'images', 'title': 'Images' }, + { 'key': 'videos', 'title': 'Videos' }, + { 'key': 'identities', 'title': 'Identities' }, + { 'key': 'purpose', 'title': 'Purpose' }, + { 'key': 'created_by', 'title': 'Created by' }, + { 'key': 'funded_by_short', 'title': 'Funded by' }, + { 'key': 'size_gb', 'title': 'Download Size' }, + { 'key': 'website', 'title': 'Website' }, +] + def parse_metadata(fn, sections): """ parse the metadata headers in a markdown file @@ -87,8 +108,25 @@ def parse_metadata(fn, sections): print("Bad metadata? 
{}".format(dataset_path)) elif 'datasets' in fn: print("/!\\ {} does not exist!".format(dataset_path)) + + if metadata['slug'] in sidebar: + sidebar_row = sidebar[metadata['slug']] + if sidebar_row: + metadata['sidebar'] = [] + for item in sidebar_order: + key = item['key'] + value = sidebar_row[key] + if value: + value = value.replace(' - ', ' – ') + if key == 'size_gb': + value += ' GB' + if key == 'website': + value = "" + domainFromUrl(value) + "" + metadata['sidebar'].append({ 'value': value, 'title': item['title'], }) + if 'meta' not in metadata or not metadata['meta']: # dude metadata['meta'] = {} + metadata['sidebar'] = [] return metadata, valid_sections diff --git a/megapixels/app/site/parser.py b/megapixels/app/site/parser.py index 06c45f41..dc2a09f2 100644 --- a/megapixels/app/site/parser.py +++ b/megapixels/app/site/parser.py @@ -55,7 +55,7 @@ def parse_markdown(metadata, sections, s3_path, skip_h1=False): elif '### statistics' in section.lower() or '### sidebar' in section.lower(): if len(current_group): groups.append(format_section(current_group, s3_path)) - current_group = [] + current_group = [format_include("{% include 'sidebar.html' %}", metadata)] if 'sidebar' not in section.lower(): current_group.append(section) in_stats = True @@ -267,15 +267,6 @@ def format_include(section, metadata): include_fn = section.strip().strip('\n').strip().strip('{%').strip().strip('%}').strip() include_fn = include_fn.strip('include').strip().strip('"').strip().strip("'").strip() return includes_env.get_template(include_fn).render(metadata=metadata) - # include_dir = cfg.DIR_SITE_INCLUDES - # try: - # includes_env.get_template(fp_html) - # with open(join(include_dir, fp_html), 'r') as fp: - # html = fp.read().replace('\n', '') - # return html - # except Exception as e: - # print(f'Error parsing include: {e}') - # return '' def format_applet(section, s3_path): """ diff --git a/site/assets/css/css.css b/site/assets/css/css.css index 0ee8a4f3..30663ef7 100644 --- a/site/assets/css/css.css +++ b/site/assets/css/css.css @@ -1,4 +1,4 @@ -da* { box-sizing: border-box; outline: 0; } +* { box-sizing: border-box; outline: 0; } html, body { margin: 0; padding: 0; @@ -278,11 +278,8 @@ p.subp{ color: #ccc; margin-bottom: 20px; font-family: 'Roboto', sans-serif; -} -.meta > div { margin-right: 20px; line-height: 17px - /*font-size:11px;*/ } .meta .gray { font-size: 9pt; @@ -316,12 +313,6 @@ p.subp{ .left-sidebar .meta, .right-sidebar .meta { flex-direction: column; } -.right-sidebar .meta > div { - margin-bottom: 10px; -} -.left-sidebar .meta > div { - margin-bottom: 15px; -} .right-sidebar ul { margin-bottom: 10px; color: #aaa; diff --git a/site/content/pages/datasets/brainwash/index.md b/site/content/pages/datasets/brainwash/index.md index 6d90e78f..db88d949 100644 --- a/site/content/pages/datasets/brainwash/index.md +++ b/site/content/pages/datasets/brainwash/index.md @@ -15,16 +15,7 @@ authors: Adam Harvey ------------ ### sidebar - -+ Published: 2015 -+ Images: 11,918 -+ Faces: 91,146 -+ Created by: Stanford University (US)
      Max Planck Institute for Informatics (DE) -+ Funded by: Max Planck Center for Visual Computing and Communication -+ Purpose: Head detection -+ Download Size: 4.1GB -+ Website: stanford.edu - +### end sidebar ## Brainwash Dataset diff --git a/site/includes/sidebar.html b/site/includes/sidebar.html new file mode 100644 index 00000000..0f7d2dad --- /dev/null +++ b/site/includes/sidebar.html @@ -0,0 +1,6 @@ +{% for item in metadata.sidebar %} +
      +
      {{ item.title }}
      +
      {{ item.value }}
      +
      +{% endfor %} \ No newline at end of file diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html index 540e2d0d..1b03fc7e 100644 --- a/site/public/datasets/50_people_one_question/index.html +++ b/site/public/datasets/50_people_one_question/index.html @@ -27,7 +27,8 @@
People One Question is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms
      People One Question dataset includes ... -

      50 People 1 Question

      +

      50 People 1 Question

      (PAGE UNDER DEVELOPMENT)

      At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio.

      Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat

      @@ -71,7 +72,6 @@
      -->
      -
      diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html index 5e8f3a4c..c0830a96 100644 --- a/site/public/datasets/brainwash/index.html +++ b/site/public/datasets/brainwash/index.html @@ -27,7 +27,28 @@
      Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014
      The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms -

      Brainwash Dataset

      +

      Brainwash Dataset

      Brainwash is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe" 1 captured at 100 second intervals throught the entire day. Brainwash dataset was captured during 3 days in 2014: October 27, November 13, and November 24. According the author's reserach paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper].

Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015 it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance 2 3.

If you happen to have been at Brainwash Cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset.

      @@ -94,7 +115,6 @@
      -
      diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html index f1ee0c22..ef7a3b27 100644 --- a/site/public/datasets/celeba/index.html +++ b/site/public/datasets/celeba/index.html @@ -27,7 +27,8 @@
      CelebA is a dataset of people...
      CelebA includes... -

      CelebA

      +

      CelebA

      (PAGE UNDER DEVELOPMENT)

      At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio.

      Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat

      @@ -71,7 +72,6 @@
      -->
      -
      diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html index 1f5aa315..3520aaa2 100644 --- a/site/public/datasets/cofw/index.html +++ b/site/public/datasets/cofw/index.html @@ -26,7 +26,8 @@
      -

      Caltech Occluded Faces in the Wild

      +

      Caltech Occluded Faces in the Wild

      (PAGE UNDER DEVELOPMENT)

      COFW is "is designed to benchmark face landmark algorithms in realistic conditions, which include heavy occlusions and large shape variations" [Robust face landmark estimation under occlusion].

      RESEARCH below this line

      @@ -81,7 +82,6 @@ To increase the number of training images, and since COFW has the exact same la
      -->
      -
      diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html index 83050506..c3e84053 100644 --- a/site/public/datasets/duke_mtmc/index.html +++ b/site/public/datasets/duke_mtmc/index.html @@ -27,7 +27,8 @@
      Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus
      Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 HD cameras at Duke University campus in March 2014 -

      Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)

      +

      Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)

      [ PAGE UNDER DEVELOPMENT ]

Duke MTMC is a dataset of video recorded on Duke University campus for the purpose of training, evaluating, and improving multi-target multi-camera tracking. The videos were recorded during February and March 2014 and include:

      Includes a total of 888.8 minutes of video (ind. verified)

      @@ -89,7 +90,6 @@
      -
      diff --git a/site/public/datasets/facebook/index.html b/site/public/datasets/facebook/index.html index 7fb1901a..e9adb3f2 100644 --- a/site/public/datasets/facebook/index.html +++ b/site/public/datasets/facebook/index.html @@ -27,7 +27,8 @@
      TBD
      TBD -
      TBD

      Statistics

      +
      TBD

      {% include 'sidebar.html' %}

      +

      Statistics

      Years
      2002-2004
      Images
      13,233
      Identities
      5,749
      Origin
      Yahoo News Images
      Funding
      (Possibly, partially CIA)

      Ignore content below these lines

      • Tool to create face datasets from Facebook https://github.com/ankitaggarwal011/FaceGrab
      • diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html index 528d1c3d..3215fb5d 100644 --- a/site/public/datasets/hrt_transgender/index.html +++ b/site/public/datasets/hrt_transgender/index.html @@ -27,7 +27,8 @@
        TBD
        TBD -

        HRT Transgender Dataset

        +

      HRT Transgender Dataset

      Who used HRT Transgender?

      @@ -83,7 +84,6 @@
      -->
      -
      diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html index 5f076fc7..562169e4 100644 --- a/site/public/datasets/lfw/index.html +++ b/site/public/datasets/lfw/index.html @@ -27,7 +27,8 @@
Labeled Faces in the Wild (LFW) is the first facial recognition dataset created entirely from online photos
It includes 13,233 images of 5,749 people copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms. -
      -
      diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html index 951646e3..7d9f87f6 100644 --- a/site/public/datasets/market_1501/index.html +++ b/site/public/datasets/market_1501/index.html @@ -27,7 +27,8 @@
Market-1501 is a dataset of CCTV footage from ...
      The Market-1501 dataset includes ... -

      Market-1501 ...

      +

      Market-1501 ...

      (PAGE UNDER DEVELOPMENT)

      @@ -69,7 +70,6 @@
      -->
      -
      diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html index 9a671c8e..ecab4c3a 100644 --- a/site/public/datasets/msceleb/index.html +++ b/site/public/datasets/msceleb/index.html @@ -27,7 +27,8 @@
      MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms
      The MS Celeb dataset includes over 10,000,000 images and 93,000 identities of semi-public figures collected using the Bing search engine -

      Microsoft Celeb Dataset (MS Celeb)

      +

      Microsoft Celeb Dataset (MS Celeb)

      (PAGE UNDER DEVELOPMENT)

      At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio.

      Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat

      @@ -87,7 +88,6 @@ -->

      Add more analysis here

      -
      diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html index fe6a4742..ff4302eb 100644 --- a/site/public/datasets/pipa/index.html +++ b/site/public/datasets/pipa/index.html @@ -27,7 +27,8 @@
PIPA is a dataset...
      PIPA subdescription -

      Dataset Title TBD

      +

      Dataset Title TBD

      (PAGE UNDER DEVELOPMENT)

      @@ -69,7 +70,6 @@
      -->
      -
      diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html index 10b7603e..0925763b 100644 --- a/site/public/datasets/uccs/index.html +++ b/site/public/datasets/uccs/index.html @@ -27,7 +27,8 @@
      Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge
The UCCS dataset includes 16,149 images and 1,732 identities of students at the University of Colorado Colorado Springs campus and is used for face recognition and face detection -

      Unconstrained College Students ...

      +

      Unconstrained College Students ...

      (PAGE UNDER DEVELOPMENT)

The pixel-average of all Unconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey
The pixel-average of all Unconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey
      @@ -84,7 +85,6 @@
      -
      diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html index cc4272c8..b838c2b9 100644 --- a/site/public/datasets/viper/index.html +++ b/site/public/datasets/viper/index.html @@ -27,7 +27,8 @@
      VIPeR is a person re-identification dataset of images captured at UC Santa Cruz in 2007
VIPeR contains 1,264 images of 632 people captured on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance -

      VIPeR Dataset

      +

      VIPeR Dataset

      (PAGE UNDER DEVELOPMENT)

VIPeR (Viewpoint Invariant Pedestrian Recognition) is a dataset of pedestrian images captured at the University of California, Santa Cruz in 2007. According to the researchers 2 "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."

VIPeR is amongst the most widely used publicly available person re-identification datasets. In 2017 the VIPeR dataset was combined into a larger person re-identification dataset created by the Chinese University of Hong Kong called PETA (PEdesTrian Attribute).

      @@ -86,7 +87,6 @@
      -->
      -
      diff --git a/site/public/research/index.html b/site/public/research/index.html index 303732f8..0ef57043 100644 --- a/site/public/research/index.html +++ b/site/public/research/index.html @@ -26,8 +26,22 @@
      -

      Research Blog

      -
      +
      +

      Research

      +
      +
      +
      Posted
      +
      2018-12-15
      +
      +
      +
      By
      +
      Adam Harvey
      +
      + +
      +
      + +
      -- cgit v1.2.3-70-g09d2