From b73e233acec5ad6c3aca7475288482f366f7a31f Mon Sep 17 00:00:00 2001
From: adamhrv
[ page under development ]
Brainwash is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe" 1 captured at 100 second intervals throughout the entire day. The Brainwash dataset was captured during 3 days in 2014: October 27, November 13, and November 24. According to the authors' research paper introducing the dataset, the images were acquired with the help of Angelcam.com. 2
Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015 it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance. 3 4
If you happened to be at Brainwash Cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset and have unwittingly contributed to surveillance research.
Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems.
MTMC tracking algorithms are used for citywide dragnet surveillance systems such as those used throughout China by SenseTime 1 and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets 2. In fact, researchers from both SenseTime 4 5 and SenseNets 3 used the Duke MTMC dataset for their research. In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets, one that is used around the world by commercial and defense surveillance organizations. Since its publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime 4 5, SenseNets 3, IARPA and IBM 9, China's National University of Defense Technology 7 8, the US Department of Homeland Security 10, Tencent, Microsoft, Microsoft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google, and Hewlett-Packard Labs, to name only a few. The creation and publication of the Duke MTMC dataset (recorded in 2014, published in 2016) was originally funded by the U.S. Army Research Laboratory and the National Science Foundation 6. However, our analysis of the geographic locations of the publicly available research shows over twice as many citations by researchers from China as from the United States (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China. The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy" 6.
Camera 5 was positioned to capture students entering and exiting the university's main chapel. Each camera's location and approximate field of view. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.
50 People 1 Question
+50 People 1 Question
-Who used 50 People One Question Dataset?
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
index df14e7cd..832ce86a 100644
--- a/site/public/datasets/afad/index.html
+++ b/site/public/datasets/afad/index.html
@@ -26,7 +26,8 @@
Who used Asian Face Age Dataset?
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 03331a2d..494856ec 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -27,7 +27,8 @@
Brainwash Dataset
+Brainwash Dataset
-Who used Brainwash Dataset?
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
index c4caef20..e42ceb6f 100644
--- a/site/public/datasets/celeba/index.html
+++ b/site/public/datasets/celeba/index.html
@@ -27,7 +27,8 @@
CelebA Dataset
+CelebA Dataset
-Who used CelebA Dataset?
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index 4851e256..39e9680b 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -26,7 +26,8 @@
Who used COFW Dataset?
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index ba32484a..78067101 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -27,7 +27,8 @@
Duke MTMC
+Duke MTMC
Who used Duke MTMC Dataset?
@@ -217,7 +218,11 @@ under Grants IIS-10-17017 and IIS-14-20894.
https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/
"Attention-Aware Compositional Network for Person Re-identification". 2018. SemanticScholar, PDF
"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. SemanticScholar, PDF
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
index 089cd351..929041df 100644
--- a/site/public/datasets/feret/index.html
+++ b/site/public/datasets/feret/index.html
@@ -26,7 +26,8 @@
The FERET program is sponsored by the U.S. Department of Defense's Counterdrug Technology Development Program Office. The U.S. Army Research Laboratory (ARL) is the technical agent for the FERET program. ARL designed, administered, and scored the FERET tests. George Mason University collected, processed, and maintained the FERET database. Inquiries regarding the FERET database or test should be directed to P. Jonathon Phillips.
[ page under development ]
+[ page under development ]
{% include 'dashboard.html' %}
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index 60a6bf0e..1907f959 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -27,7 +27,8 @@
[ PAGE UNDER DEVELOPMENT ]
+[ PAGE UNDER DEVELOPMENT ]
Labeled Faces in The Wild (LFW) is "a database of face photographs designed for studying the problem of unconstrained face recognition" 1. It is used to evaluate and improve the performance of facial recognition algorithms in academic, commercial, and government research. According to BiometricUpdate.com 3, LFW is "the most widely used evaluation set in the field of facial recognition, LFW attracts a few dozen teams from around the globe including Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong."
The LFW dataset includes 13,233 images of 5,749 people that were collected between 2002 and 2004. LFW is a subset of Names and Faces and is part of the first facial recognition training dataset created entirely from images appearing on the Internet. The people appearing in LFW are...
The Names and Faces dataset was the first face recognition dataset created entirely from online photos. However, Names and Faces and LFW are not the first face recognition datasets created entirely "in the wild"; that title belongs to the UCD dataset. Obtaining images "in the wild" means using an image without explicit consent or awareness from the subject or photographer.
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index 72807efc..ad6bf458 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -27,7 +27,8 @@
[ PAGE UNDER DEVELOPMENT ]
+[ PAGE UNDER DEVELOPMENT ]
[ PAGE UNDER DEVELOPMENT ]
+[ PAGE UNDER DEVELOPMENT ]
https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology
The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems. 1 The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009 2, the Oxford Town Centre dataset has been used in over 80 verified research projects, including commercial research by Amazon, Disney, OSRAM, and Huawei, and academic research in China, Israel, Russia, Singapore, the US, and Germany, among dozens more.
+The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems. 1 The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009 2, the Oxford Town Centre dataset has been used in over 80 verified research projects, including commercial research by Amazon, Disney, OSRAM, and Huawei, and academic research in China, Israel, Russia, Singapore, the US, and Germany, among dozens more.
The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act naturally and unrehearsed, indicating they neither knew of nor consented to participation in the research project.
[ PAGE UNDER DEVELOPMENT ]
+[ PAGE UNDER DEVELOPMENT ]
[ PAGE UNDER DEVELOPMENT ]
+[ PAGE UNDER DEVELOPMENT ]
UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at the University of Colorado Colorado Springs. According to the authors of two papers associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge" 2. In this investigation, we examine the funding sources, contents of the dataset, photo EXIF data, and publicly available research project citations.
According to the authors of the UnConstrained College Students dataset, it is primarily used for research and development of "face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet." Applications of this technology include usage by defense and intelligence agencies, who were also the primary funding sources of the UCCS dataset.
In the two papers associated with the release of the UCCS dataset (Unconstrained Face Detection and Open-Set Face Recognition Challenge and Large Scale Unconstrained Open Set Face Database), the researchers disclosed their funding sources as ODNI (United States Office of the Director of National Intelligence), IARPA (Intelligence Advanced Research Projects Activity), ONR MURI (Office of Naval Research and The Department of Defense Multidisciplinary University Research Initiative), Army SBIR (Small Business Innovation Research), SOCOM SBIR (Special Operations Command and Small Business Innovation Research), and the National Science Foundation. Further, UCCS's VAST site explicitly states that they are part of IARPA Janus, a face recognition project developed to serve the needs of national intelligence interests.
The UCCS dataset includes the highest resolution images of any publicly available face recognition dataset discovered so far (18MP) and was, as of 2018, the "largest surveillance FR benchmark in the public domain." 3 To create the dataset, the researchers used a Canon 7D digital camera fitted with a Sigma 800mm telephoto lens and photographed students from a distance of 150–200m through their office window. Photos were taken during the morning and afternoon while students were walking to and from classes. According to an analysis of the EXIF data embedded in the photos, nearly half of the 16,149 photos were taken on Tuesdays. The most popular time was during lunch break. All of the photos were taken during the spring semesters of 2012 and 2013, but the dataset was not publicly released until 2016.
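The investigation's EXIF analysis is not published as code, but the weekday and time-of-day tally it describes can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the `tally_capture_times` function and its input are hypothetical, and in practice the `DateTimeOriginal` strings (format `"YYYY:MM:DD HH:MM:SS"`) would first be read out of each photo with an EXIF library such as Pillow.

```python
from collections import Counter
from datetime import datetime

def tally_capture_times(exif_datetimes):
    """Count photos per weekday and per hour from EXIF DateTimeOriginal
    strings, which use the format "YYYY:MM:DD HH:MM:SS"."""
    weekdays, hours = Counter(), Counter()
    for raw in exif_datetimes:
        dt = datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
        weekdays[dt.strftime("%A")] += 1  # e.g. "Tuesday"
        hours[dt.hour] += 1               # e.g. 12 for the lunch hour
    return weekdays, hours

# Hypothetical sample timestamps, not actual UCCS EXIF data:
sample = ["2012:04:17 12:05:33", "2012:04:17 12:44:10", "2013:04:16 09:15:02"]
weekdays, hours = tally_capture_times(sample)
print(weekdays.most_common(1))  # [('Tuesday', 3)]
```

Running a tally like this over all 16,149 timestamps is what would surface the patterns reported above, such as the concentration of captures on Tuesdays and around the lunch break.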
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
index 75d73824..3c2859a5 100644
--- a/site/public/datasets/vgg_face2/index.html
+++ b/site/public/datasets/vgg_face2/index.html
@@ -26,7 +26,8 @@
[ page under development ]
[ page under development ]
+[ page under development ]
VIPeR (Viewpoint Invariant Pedestrian Recognition) is a dataset of pedestrian images captured at the University of California Santa Cruz in 2007. According to the researchers 2, "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."
VIPeR is among the most widely used publicly available person re-identification datasets. In 2017 the VIPeR dataset was combined into a larger person re-identification dataset created by the Chinese University of Hong Kong called PETA (PEdesTrian Attribute).
[ page under development ]
+[ page under development ]