author adamhrv <adam@ahprojects.com> 2019-04-17 16:51:44 +0200
committer adamhrv <adam@ahprojects.com> 2019-04-17 16:51:44 +0200
commit 43c3e3904f80eb56769fba4634729d0e567f9a32 (patch)
tree 757c69f2d1065fae4ca05570026175481cd74216
parent 9d6d12f0b16d10219c62f25ce036b9377417de70 (diff)
update duke
-rw-r--r-- site/content/pages/datasets/duke_mtmc/index.md 71
-rw-r--r-- site/public/about/assets/LICENSE/index.html 22
-rw-r--r-- site/public/about/attribution/index.html 22
-rw-r--r-- site/public/about/index.html 30
-rw-r--r-- site/public/about/legal/index.html 22
-rw-r--r-- site/public/about/press/index.html 22
-rw-r--r-- site/public/datasets/50_people_one_question/index.html 22
-rw-r--r-- site/public/datasets/afad/index.html 22
-rw-r--r-- site/public/datasets/brainwash/index.html 23
-rw-r--r-- site/public/datasets/caltech_10k/index.html 22
-rw-r--r-- site/public/datasets/celeba/index.html 22
-rw-r--r-- site/public/datasets/cofw/index.html 22
-rw-r--r-- site/public/datasets/duke_mtmc/index.html 231
-rw-r--r-- site/public/datasets/feret/index.html 22
-rw-r--r-- site/public/datasets/hrt_transgender/index.html 22
-rw-r--r-- site/public/datasets/index.html 24
-rw-r--r-- site/public/datasets/lfpw/index.html 22
-rw-r--r-- site/public/datasets/lfw/index.html 22
-rw-r--r-- site/public/datasets/market_1501/index.html 22
-rw-r--r-- site/public/datasets/msceleb/index.html 22
-rw-r--r-- site/public/datasets/oxford_town_centre/index.html 22
-rw-r--r-- site/public/datasets/pipa/index.html 22
-rw-r--r-- site/public/datasets/pubfig/index.html 22
-rw-r--r-- site/public/datasets/uccs/index.html 33
-rw-r--r-- site/public/datasets/vgg_face2/index.html 22
-rw-r--r-- site/public/datasets/viper/index.html 22
-rw-r--r-- site/public/datasets/youtube_celebrities/index.html 22
-rw-r--r-- site/public/info/index.html 22
-rw-r--r-- site/public/research/00_introduction/index.html 22
-rw-r--r-- site/public/research/01_from_1_to_100_pixels/index.html 22
-rw-r--r-- site/public/research/02_what_computers_can_see/index.html 22
-rw-r--r-- site/public/research/index.html 22
-rw-r--r-- site/public/test/chart/index.html 22
-rw-r--r-- site/public/test/citations/index.html 22
-rw-r--r-- site/public/test/csv/index.html 22
-rw-r--r-- site/public/test/datasets/index.html 22
-rw-r--r-- site/public/test/face_search/index.html 22
-rw-r--r-- site/public/test/gallery/index.html 22
-rw-r--r-- site/public/test/index.html 22
-rw-r--r-- site/public/test/map/index.html 22
-rw-r--r-- site/public/test/name_search/index.html 22
-rw-r--r-- site/public/test/pie_chart/index.html 22
42 files changed, 686 insertions, 518 deletions
diff --git a/site/content/pages/datasets/duke_mtmc/index.md b/site/content/pages/datasets/duke_mtmc/index.md
index ac0a3f2e..2a8bfe05 100644
--- a/site/content/pages/datasets/duke_mtmc/index.md
+++ b/site/content/pages/datasets/duke_mtmc/index.md
@@ -18,35 +18,63 @@ authors: Adam Harvey
### sidebar
### end sidebar
-[ page under development ]
+Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60FPS with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy"[^duke_mtmc_orig].
-Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking algorithms are used for citywide dragnet surveillance systems such as those used throughout China by SenseTime[^sensetime_qz] and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets[^sensenets_uyghurs]. In fact researchers from both SenseTime[^sensetime1] [^sensetime2] and SenseNets[^sensenets_sensetime] used the Duke MTMC dataset for their research.
+In this investigation into the Duke MTMC dataset we tracked down over 100 publicly available research papers that explicitly acknowledged using Duke MTMC. Our analysis shows that the dataset has spread far beyond its origins and intentions in academic research projects at Duke University. Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with explicit and direct links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.
-In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets that's used around the world by commercial and defense surveillance organizations.
+In one 2018 [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf) jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled [Attention-Aware Compositional Network for Person Re-identification](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e), the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in "finding missing elderly and children, and suspect tracking, etc." Both SenseNets and SenseTime have been directly linked to providing the surveillance technology used to monitor Uighur Muslims in China. [^sensetime_qz][^sensenets_uyghurs][^xinjiang_nyt]
-Since it's publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime[^sensetime1] [^sensetime2], SenseNets[^sensenets_sensetime], IARPA and IBM[^iarpa_ibm], Chinese National University of Defense [^cn_defense1][^cn_defense2], US Department of Homeland Security[^us_dhs], Tencent, Microsoft, Microsft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google and Hewlett-Packard Labs to name only a few.
+![caption: A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.](assets/duke_mtmc_reid_montage.jpg)
-The creation and publication of the Duke MTMC dataset in 2014 (published in 2016) was originally funded by the U.S. Army Research Laboratory and the National Science Foundation[^duke_mtmc_orig]. Though our analysis of the geographic locations of the publicly available research shows over twice as many citations by researchers from China (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China.
+Despite [repeated](https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent) [warnings](https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region) by Human Rights Watch that the authoritarian surveillance used in China represents a violation of human rights, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 70 research projects happening in China that publicly acknowledged benefiting from the Duke MTMC dataset. Amongst these were projects from SenseNets, SenseTime, CloudWalk, Megvii, Beihang University, and the PLA's National University of Defense Technology.
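The country-of-origin tally described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical records; the actual analysis would load scraped paper metadata (e.g. from Semantic Scholar) rather than the inline list shown here:

```python
from collections import Counter

# Hypothetical citation records for illustration; the real analysis
# would parse scraped paper metadata (title, year, authors' country).
citations = [
    {"title": "Attention-Aware Compositional Network for Person Re-identification",
     "year": 2018, "country": "China"},
    {"title": "Re-Identification with Consistent Attentive Siamese Networks",
     "year": 2019, "country": "United States"},
    {"title": "Horizontal Pyramid Matching for Person Re-identification",
     "year": 2018, "country": "China"},
]

# Count citations per country, then filter for the single-year figure.
by_country = Counter(c["country"] for c in citations)
china_2018 = sum(1 for c in citations
                 if c["country"] == "China" and c["year"] == 2018)

print(by_country)   # citation counts by country
print(china_2018)   # citations from China in 2018
```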
-![caption: A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.](assets/duke_mtmc_reid_montage.jpg)
+| Organization | Paper | Link | Year | Used Duke MTMC |
+|---|---|---|---|---|
+| SenseNets, SenseTime | Attention-Aware Compositional Network for Person Re-identification | [SemanticScholar](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e) | 2018 | &#x2714; |
+|SenseTime| End-to-End Deep Kronecker-Product Matching for Person Re-identification | [thcvf.com](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf) | 2018| &#x2714; |
+| Megvii | Multi-Target, Multi-Camera Tracking by Hierarchical Clustering: Recent Progress on DukeMTMC Project | [SemanticScholar](https://www.semanticscholar.org/paper/Multi-Target%2C-Multi-Camera-Tracking-by-Hierarchical-Zhang-Wu/10c20cf47d61063032dce4af73a4b8e350bf1128) | 2018 | &#x2714; |
+| Megvii | Person Re-Identification (slides) | [github.io](https://zsc.github.io/megvii-pku-dl-course/slides/Lecture%2011,%20Human%20Understanding_%20ReID%20and%20Pose%20and%20Attributes%20and%20Activity%20.pdf) | 2017 | &#x2714; |
+| Megvii | SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification | [arxiv.org](https://arxiv.org/abs/1810.06996) | 2018 | &#x2714; |
+| CloudWalk | CloudWalk re-identification technology extends facial biometric tracking with improved accuracy | [BiometricUpdate.com](https://www.biometricupdate.com/201903/cloudwalk-re-identification-technology-extends-facial-biometric-tracking-with-improved-accuracy) | 2018 | &#x2714; |
+| CloudWalk | Horizontal Pyramid Matching for Person Re-identification | [arxiv.org](https://arxiv.org/abs/1804.05275) | 2018 | &#x2714; |
+| National University of Defense Technology | Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers | [SemanticScholar.org](https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786) | 2018 | &#x2714; |
+| National University of Defense Technology | Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks | [SemanticScholar.org](https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881) | 2018 | &#x2714; |
+| Beihang University | Orientation-Guided Similarity Learning for Person Re-identification | [ieee.org](https://ieeexplore.ieee.org/document/8545620) | 2018 | &#x2714; |
+| Beihang University | Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology | [acm.org](https://dl.acm.org/citation.cfm?id=3240663) | 2018 | &#x2714; |
-The 8 cameras deployed on Duke's campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".[^duke_mtmc_orig]. Camera 5 was positioned to capture students as entering and exiting the university's main chapel. Each camera's location and approximate field of view. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.
+The reasons that companies in China use the Duke MTMC dataset for research are technically no different from the reasons it is used in the United States and Europe. In fact, the original creators of the dataset published a follow-up report in 2017 titled [Tracking Social Groups Within and Across Cameras](https://www.semanticscholar.org/paper/Tracking-Social-Groups-Within-and-Across-Cameras-Solera-Calderara/9e644b1e33dd9367be167eb9d832174004840400) with specific applications to "automated analysis of crowds and social gatherings for surveillance and security applications". Their work, as well as the creation of the original dataset in 2014, was supported in part by the United States Army Research Laboratory.
-![caption: Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.](assets/duke_mtmc_camera_map.jpg)
+Citations from the United States and Europe show a similar trend to that in China, including publicly acknowledged and verified usage of the Duke MTMC dataset supported or carried out by the United States Department of Homeland Security, IARPA, IBM, Microsoft (which provides surveillance technology to ICE), and Vision Semantics (which works with the UK Ministry of Defence). One [paper](https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf) is even jointly published by researchers affiliated with both University College London and the National University of Defense Technology in China.
-![caption: Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc](assets/duke_mtmc_cameras.jpg)
+| Organization | Paper | Link | Year | Used Duke MTMC |
+|---|---|---|---|---|
+| IARPA, IBM, CloudWalk | Horizontal Pyramid Matching for Person Re-identification | [arxiv.org](https://arxiv.org/abs/1804.05275) | 2018 | &#x2714; |
+| Microsoft | ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Enterprise Scale | [arxiv.org](https://arxiv.org/abs/1811.01268) | 2018 | &#x2714; |
+| Microsoft | Scaling Video Analytics Systems to Large Camera Deployments | [arxiv.org](https://arxiv.org/pdf/1809.02318.pdf) | 2018 | &#x2714; |
+| University College London, National University of Defense Technology | Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks | [PDF](https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf) | 2018 | &#x2714; |
+| Vision Semantics Ltd. | Unsupervised Person Re-identification by Deep Learning Tracklet Association | [arxiv.org](https://arxiv.org/abs/1809.02874) | 2018 | &#x2714; |
+| US Dept. of Homeland Security | Re-Identification with Consistent Attentive Siamese Networks | [arxiv.org](https://arxiv.org/abs/1811.07487/) | 2019 | &#x2714; |
+
+
+By some metrics the dataset is considered a huge success. It is regarded as highly influential research and has contributed to hundreds, if not thousands, of projects to advance artificial intelligence for person tracking and monitoring. All the above citations, regardless of which country is using it, align perfectly with the original [intent](http://vision.cs.duke.edu/DukeMTMC/) of the Duke MTMC dataset: "to accelerate advances in multi-target multi-camera tracking".
+
+The same logic applies to all the new extensions of the Duke MTMC dataset, including [Duke MTMC Re-ID](https://github.com/layumi/DukeMTMC-reID_evaluation), [Duke MTMC Video Re-ID](https://github.com/Yu-Wu/DukeMTMC-VideoReID), Duke MTMC Groups, and [Duke MTMC Attribute](https://github.com/vana77/DukeMTMC-attribute). And it also applies to all the new specialized datasets that will be created from Duke MTMC, such as the low-resolution face recognition dataset called [QMUL-SurvFace](https://qmul-survface.github.io/), which was funded in part by [SeeQuestor](https://seequestor.com), a computer vision provider to law enforcement agencies including Scotland Yard and Queensland Police. From the perspective of academic researchers, companies, and defense agencies using these datasets to advance their organization's work, Duke MTMC contributes value to their bottom line. Regardless of who is using these datasets or how they're used, they are simply provided to make networks of surveillance cameras more powerful.
![caption: Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc](assets/duke_mtmc_saliencies.jpg)
+But from a privacy and human rights perspective, the creation and distribution of the Duke MTMC dataset illustrate an egregious prioritization of surveillance technologies over individual rights, where the simple act of going to class could implicate your biometric data in a surveillance training dataset.
+
+For the approximately 2,000 students in the Duke MTMC dataset there is unfortunately no escape. It would be impossible to remove oneself from all copies of the dataset downloaded around the world. Instead, the students and visitors who happened to be walking to class on March 13, 2014 will forever remain in all downloaded copies of the Duke MTMC dataset and all its extensions, contributing to a global supply chain of data that powers governmental and commercial expansion of biometric surveillance technologies.
+
{% include 'dashboard.html' %}
{% include 'supplementary_header.html' %}
-#### Funding
+![caption: Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.](assets/duke_mtmc_camera_map.jpg)
-Original funding for the Duke MTMC dataset was provided by the Army Research Office under Grant No. W911NF-10-1-0387 and by the National Science Foundation
-under Grants IIS-10-17017 and IIS-14-20894.
+![caption: Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc](assets/duke_mtmc_cameras.jpg)
#### Video Timestamps
@@ -73,16 +101,10 @@ The video timestamps contain the likely, but not yet confirmed, date and times o
=== end columns
-### Opting Out
-
-If you attended Duke University and were captured by any of the 8 surveillance cameras positioned on campus in 2014, there is unfortunately no way to be removed. The dataset files have been distributed throughout the world and it would not be possible to contact all the owners for removal. Nor do the authors provide any options for students to opt-out, nor did they even inform students they would be used at test subjects for surveillance research and development in a project funded, in part, by the United States Army Research Office.
-
#### Notes
- The Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812
-{% include 'cite_our_work.html' %}
-
If you use any data from the Duke MTMC please follow their [license](http://vision.cs.duke.edu/DukeMTMC/#how-to-cite) and cite their work as:
<pre>
@@ -94,19 +116,16 @@ If you use any data from the Duke MTMC please follow their [license](http://visi
}
</pre>
+{% include 'cite_our_work.html' %}
+
+
#### ToDo
- clean up citations, formatting
### Footnotes
+[^xinjiang_nyt]: Mozur, Paul. "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority". https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html. April 14, 2019.
[^sensetime_qz]: <https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/>
[^sensenets_uyghurs]: <https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/>
-[^sensenets_sensetime]: "Attention-Aware Compositional Network for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf)
-[^sensetime1]: "End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838), [PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf)
-[^sensetime2]: "Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b)
-[^duke_mtmc_orig]: "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. [SemanticScholar](https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c)
-[^cn_defense1]: "Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786)
-[^cn_defense2]: "Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881)
-[^iarpa_ibm]: "Horizontal Pyramid Matching for Person Re-identification". 2019. [SemanticScholar](https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8)
-[^us_dhs]: "Re-Identification with Consistent Attentive Siamese Networks". 2018. [SemanticScholar](https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94) \ No newline at end of file
+[^duke_mtmc_orig]: "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. [SemanticScholar](https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c) \ No newline at end of file
diff --git a/site/public/about/assets/LICENSE/index.html b/site/public/about/assets/LICENSE/index.html
index 0d3a7878..66d8b3ac 100644
--- a/site/public/about/assets/LICENSE/index.html
+++ b/site/public/about/assets/LICENSE/index.html
@@ -40,17 +40,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/about/attribution/index.html b/site/public/about/attribution/index.html
index 0a1b8e0f..5fe92b8d 100644
--- a/site/public/about/attribution/index.html
+++ b/site/public/about/attribution/index.html
@@ -60,17 +60,17 @@ To Adapt: To modify, transform and build upon the database</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/about/index.html b/site/public/about/index.html
index 4a4ab3c6..b83736d3 100644
--- a/site/public/about/index.html
+++ b/site/public/about/index.html
@@ -35,7 +35,8 @@
<li><a href="/about/legal/">Legal / Privacy</a></li>
</ul>
</section><p>MegaPixels is an independent art and research project by Adam Harvey and Jules LaPlace that investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies.</p>
-<p>The MegaPixels site is made possible with support from <a href="http://mozilla.org">Mozilla</a></p>
+<p>MegaPixels is made possible with support from <a href="http://mozilla.org">Mozilla</a>, our primary funding partner.</p>
+<p>Additional support for MegaPixels is provided by the European ARTificial Intelligence Network (AI LAB) at the Ars Electronica Center, a 1-year research-in-residence grant from Karlsruhe HfG, and sales from the Privacy Gift Shop.</p>
<div class="flex-container team-photos-container">
<div class="team-member">
<h3>Adam Harvey</h3>
@@ -75,6 +76,11 @@ You are free:</li>
<li>PDFMiner.Six and Pandas for research paper data analysis</li>
</ul>
</div></div></section><section><p>Please direct questions, comments, or feedback to <a href="https://mastodon.social/@adamhrv">mastodon.social/@adamhrv</a></p>
+<h4>Funding Partners</h4>
+<p>The MegaPixels website, research, and development are made possible with support from Mozilla, our primary funding partner.</p>
+<p>[ add logos ]</p>
+<p>Additional support is provided by the European ARTificial Intelligence Network (AI LAB) at the Ars Electronica Center and a 1-year research-in-residence grant from Karlsruhe HfG.</p>
+<p>[ add logos ]</p>
<h5>Attribution</h5>
<p>If you use MegaPixels or any data derived from it for your work, please cite our original work as follows:</p>
<pre>
@@ -89,17 +95,17 @@ You are free:</li>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/about/legal/index.html b/site/public/about/legal/index.html
index 9eb5dd5a..ce10014a 100644
--- a/site/public/about/legal/index.html
+++ b/site/public/about/legal/index.html
@@ -90,17 +90,17 @@ To Adapt: To modify, transform and build upon the database</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/about/press/index.html b/site/public/about/press/index.html
index 7b0a3e87..70caf03c 100644
--- a/site/public/about/press/index.html
+++ b/site/public/about/press/index.html
@@ -41,17 +41,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
index dc7919f7..79411122 100644
--- a/site/public/datasets/50_people_one_question/index.html
+++ b/site/public/datasets/50_people_one_question/index.html
@@ -96,17 +96,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
index f2b0a5ba..7969c1d6 100644
--- a/site/public/datasets/afad/index.html
+++ b/site/public/datasets/afad/index.html
@@ -109,17 +109,17 @@ Motivation</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 95f0d77d..becc8949 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -50,6 +50,7 @@
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
</div></div><p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset was captured during 3 days in 2014: October 27, November 13, and November 24. According to the author's research paper introducing the dataset, the images were acquired with the help of Angelcam.com.<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
<p>Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance. <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a></p>
<p>If you happen to have been at Brainwash cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset and have unwittingly contributed to surveillance research.</p>
</section><section>
@@ -145,17 +146,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html
index 04d63ee3..abb55148 100644
--- a/site/public/datasets/caltech_10k/index.html
+++ b/site/public/datasets/caltech_10k/index.html
@@ -106,17 +106,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
index c72f3798..a4a7efa2 100644
--- a/site/public/datasets/celeba/index.html
+++ b/site/public/datasets/celeba/index.html
@@ -108,17 +108,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index eef8cf5e..c6d7417e 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -161,17 +161,17 @@ To increase the number of training images, and since COFW has the exact same la
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 5cb6fb0c..48c90d66 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -46,13 +46,167 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
- </div></div><p>[ page under development ]</p>
-<p>Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking algorithms are used for citywide dragnet surveillance systems such as those used throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a> and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. In fact researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> used the Duke MTMC dataset for their research.</p>
-<p>In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets that's used around the world by commercial and defense surveillance organizations.</p>
-<p>Since it's publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime<a class="footnote_shim" name="[^sensetime1]_2"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_2"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a>, SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_2"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, Chinese National University of Defense <a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, US Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>, Tencent, Microsoft, Microsft Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google and Hewlett-Packard Labs to name only a few.</p>
-<p>The creation and publication of the Duke MTMC dataset in 2014 (published in 2016) was originally funded by the U.S. Army Research Laboratory and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Though our analysis of the geographic locations of the publicly available research shows over twice as many citations by researchers from China (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.</div></div></section><section><p>The 8 cameras deployed on Duke's campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Camera 5 was positioned to capture students as entering and exiting the university's main chapel. Each camera's location and approximate field of view. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.'><div class='caption'> Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section>
+ </div></div><p>Duke MTMC (Multi-Target, Multi-Camera) is a dataset of surveillance video footage taken on Duke University's campus in 2014 and is used for research and development of video tracking systems, person re-identification, and low-resolution facial recognition. The dataset contains over 14 hours of synchronized surveillance video from 8 cameras at 1080p and 60 FPS, with over 2 million frames of 2,000 students walking to and from classes. The 8 surveillance cameras deployed on campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy"<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 4">4</a>.</p>
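As a quick sanity check, the headline figures above are internally consistent: 14 hours of video at 60 FPS does work out to "over 2 million frames". This is a back-of-envelope sketch using only the numbers stated above, not values taken from the dataset itself:

```python
# Back-of-envelope check of the stated Duke MTMC figures:
# ~14 hours of synchronized video at 60 FPS should indeed
# yield "over 2 million frames" in total.
HOURS_OF_VIDEO = 14   # total footage, as stated above
FPS = 60              # frames per second, as stated above

total_frames = HOURS_OF_VIDEO * 3600 * FPS
print(f"{total_frames:,} frames")  # 3,024,000 frames

# Consistent with the claim of "over 2 million frames"
assert total_frames > 2_000_000
```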
+<p>In this investigation into the Duke MTMC dataset we tracked down over 100 publicly available research papers that explicitly acknowledged using Duke MTMC. Our analysis shows that the dataset has spread far beyond its origins and intentions in academic research projects at Duke University. Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers with explicit and direct links to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims.</p>
+<p>In one 2018 <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">paper</a> jointly published by researchers from SenseNets and SenseTime (and funded by SenseTime Group Limited) entitled <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Attention-Aware Compositional Network for Person Re-identification</a>, the Duke MTMC dataset was used for "extensive experiments" on improving person re-identification across multiple surveillance cameras with important applications in "finding missing elderly and children, and suspect tracking, etc." Both SenseNets and SenseTime have been directly linked to providing the surveillance technology used to monitor Uighur Muslims in China. <a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 2">2</a><a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 3">3</a><a class="footnote_shim" name="[^xinjiang_nyt]_1"> </a><a href="#[^xinjiang_nyt]" class="footnote" title="Footnote 1">1</a></p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the approximately 2,000 students and pedestrians in the Duke MTMC dataset. These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification, and eventually the QMUL SurvFace face recognition dataset. Open Data Commons Attribution License.</div></div></section><section><p>Despite <a href="https://www.hrw.org/news/2017/11/19/china-police-big-data-systems-violate-privacy-target-dissent">repeated</a> <a href="https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region">warnings</a> by Human Rights Watch that the authoritarian surveillance used in China represents a violation of human rights, researchers at Duke University continued to provide open access to their dataset for anyone to use for any project. As the surveillance crisis in China grew, so did the number of citations with links to organizations complicit in the crisis. In 2018 alone there were over 70 research projects happening in China that publicly acknowledged benefiting from the Duke MTMC dataset. Amongst these were projects from SenseNets, SenseTime, CloudWalk, Megvii, Beihang University, and the PLA's National University of Defense Technology.</p>
+<table>
+<thead><tr>
+<th>Organization</th>
+<th>Paper</th>
+<th>Link</th>
+<th>Year</th>
+<th>Used Duke MTMC</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>SenseNets, SenseTime</td>
+<td>Attention-Aware Compositional Network for Person Re-identification</td>
+<td><a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>SenseTime</td>
+<td>End-to-End Deep Kronecker-Product Matching for Person Re-identification</td>
+<td><a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">thecvf.com</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>CloudWalk</td>
+<td>Horizontal Pyramid Matching for Person Re-identification</td>
+<td><a href="https://arxiv.org/pdf/1804.05275.pdf">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Megvii</td>
+<td>Multi-Target, Multi-Camera Tracking by Hierarchical Clustering: Recent Progress on DukeMTMC Project</td>
+<td><a href="https://www.semanticscholar.org/paper/Multi-Target%2C-Multi-Camera-Tracking-by-Hierarchical-Zhang-Wu/10c20cf47d61063032dce4af73a4b8e350bf1128">SemanticScholar</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Megvii</td>
+<td>Person Re-Identification (slides)</td>
+<td><a href="https://zsc.github.io/megvii-pku-dl-course/slides/Lecture%2011,%20Human%20Understanding_%20ReID%20and%20Pose%20and%20Attributes%20and%20Activity%20.pdf">github.io</a></td>
+<td>2017</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Megvii</td>
+<td>SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification</td>
+<td><a href="https://arxiv.org/abs/1810.06996">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>CloudWalk</td>
+<td>CloudWalk re-identification technology extends facial biometric tracking with improved accuracy</td>
+<td><a href="https://www.biometricupdate.com/201903/cloudwalk-re-identification-technology-extends-facial-biometric-tracking-with-improved-accuracy">BiometricUpdate.com</a></td>
+<td>2019</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>CloudWalk</td>
+<td>Horizontal Pyramid Matching for Person Re-identification</td>
+<td><a href="https://arxiv.org/abs/1804.05275">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>National University of Defense Technology</td>
+<td>Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers</td>
+<td><a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">SemanticScholar.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>National University of Defense Technology</td>
+<td>Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks</td>
+<td><a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">SemanticScholar.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Beihang University</td>
+<td>Orientation-Guided Similarity Learning for Person Re-identification</td>
+<td><a href="https://ieeexplore.ieee.org/document/8545620">ieee.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Beihang University</td>
+<td>Online Inter-Camera Trajectory Association Exploiting Person Re-Identification and Camera Topology</td>
+<td><a href="https://dl.acm.org/citation.cfm?id=3240663">acm.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+</tbody>
+</table>
+<p>The reasons that companies in China use the Duke MTMC dataset for research are technically no different from the reasons it is used in the United States and Europe. In fact, the original creators of the dataset published a follow-up report in 2017 titled <a href="https://www.semanticscholar.org/paper/Tracking-Social-Groups-Within-and-Across-Cameras-Solera-Calderara/9e644b1e33dd9367be167eb9d832174004840400">Tracking Social Groups Within and Across Cameras</a> with specific applications to "automated analysis of crowds and social gatherings for surveillance and security applications". Their work, as well as the creation of the original dataset in 2014, was supported in part by the United States Army Research Laboratory.</p>
+<p>Citations from the United States and Europe show a similar trend to that in China, including publicly acknowledged and verified usage of the Duke MTMC dataset supported or carried out by the United States Department of Homeland Security, IARPA, IBM, Microsoft (who provides surveillance to ICE), and Vision Semantics (who works with the UK Ministry of Defence). One <a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">paper</a> is even jointly published by researchers affiliated with both University College London and the National University of Defense Technology in China.</p>
+<table>
+<thead><tr>
+<th>Organization</th>
+<th>Paper</th>
+<th>Link</th>
+<th>Year</th>
+<th>Used Duke MTMC</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>IARPA, IBM, CloudWalk</td>
+<td>Horizontal Pyramid Matching for Person Re-identification</td>
+<td><a href="https://arxiv.org/abs/1804.05275">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Microsoft</td>
+<td>ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Enterprise Scale</td>
+<td><a href="https://arxiv.org/abs/1811.01268">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Microsoft</td>
+<td>Scaling Video Analytics Systems to Large Camera Deployments</td>
+<td><a href="https://arxiv.org/pdf/1809.02318.pdf">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>University College London, National University of Defense Technology</td>
+<td>Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks</td>
+<td><a href="https://pdfs.semanticscholar.org/59f3/57015054bab43fb8cbfd3f3dbf17b1d1f881.pdf">PDF</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>Vision Semantics Ltd.</td>
+<td>Unsupervised Person Re-identification by Deep Learning Tracklet Association</td>
+<td><a href="https://arxiv.org/abs/1809.02874">arxiv.org</a></td>
+<td>2018</td>
+<td>&#x2714;</td>
+</tr>
+<tr>
+<td>US Dept. of Homeland Security</td>
+<td>Re-Identification with Consistent Attentive Siamese Networks</td>
+<td><a href="https://arxiv.org/abs/1811.07487/">arxiv.org</a></td>
+<td>2019</td>
+<td>&#x2714;</td>
+</tr>
+</tbody>
+</table>
+<p>By some metrics the dataset is considered a huge success. It is regarded as highly influential research and has contributed to hundreds, if not thousands, of projects to advance artificial intelligence for person tracking and monitoring. All the above citations, regardless of which country is using it, align perfectly with the original <a href="http://vision.cs.duke.edu/DukeMTMC/">intent</a> of the Duke MTMC dataset: "to accelerate advances in multi-target multi-camera tracking".</p>
+<p>The same logic applies to all the new extensions of the Duke MTMC dataset including <a href="https://github.com/layumi/DukeMTMC-reID_evaluation">Duke MTMC Re-ID</a>, <a href="https://github.com/Yu-Wu/DukeMTMC-VideoReID">Duke MTMC Video Re-ID</a>, Duke MTMC Groups, and <a href="https://github.com/vana77/DukeMTMC-attribute">Duke MTMC Attribute</a>. And it also applies to all the new specialized datasets created from Duke MTMC, such as the low-resolution face recognition dataset called <a href="https://qmul-survface.github.io/">QMUL-SurvFace</a>, which was funded in part by <a href="https://seequestor.com">SeeQuestor</a>, a computer vision provider to law enforcement agencies including Scotland Yard and Queensland Police. From the perspective of the academic researchers, companies, and defense agencies using these datasets to advance their organizations' work, Duke MTMC contributes value to their bottom line. Regardless of who is using these datasets or how they're used, they are simply provided to make networks of surveillance cameras more powerful.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section><p>But from a privacy and human rights perspective, the creation and distribution of the Duke MTMC dataset illustrate an egregious prioritization of surveillance technologies over individual rights, where the simple act of going to class could implicate your biometric data in a surveillance training dataset.</p>
+<p>For the approximately 2,000 students in the Duke MTMC dataset there is unfortunately no escape. It would be impossible to remove oneself from all copies of the dataset downloaded around the world. Instead, over 2,000 students and visitors who happened to be walking to class on March 14, 2014, will forever remain in all downloaded copies of the Duke MTMC dataset and all its extensions, contributing to a global supply chain of data that powers governmental and commercial expansion of biometric surveillance technologies.</p>
+</section><section>
<h3>Who used Duke MTMC Dataset?</h3>
<p>
@@ -112,10 +266,7 @@
<h2>Supplementary Information</h2>
-</section><section><h4>Funding</h4>
-<p>Original funding for the Duke MTMC dataset was provided by the Army Research Office under Grant No. W911NF-10-1-0387 and by the National Science Foundation
-under Grants IIS-10-17017 and IIS-14-20894.</p>
-<h4>Video Timestamps</h4>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.'><div class='caption'> Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section><h4>Video Timestamps</h4>
<p>The video timestamps contain the likely, but not yet confirmed, date and times of capture. Because the video timestamps align with the start and stop <a href="http://vision.cs.duke.edu/DukeMTMC/details.html#time-sync">time sync data</a> provided by the researchers, the relative timing is at least consistent. The <a href="https://www.wunderground.com/history/daily/KIGX/date/2014-3-19?req_city=Durham&amp;req_state=NC&amp;req_statename=North%20Carolina&amp;reqdb.zip=27708&amp;reqdb.magic=1&amp;reqdb.wmo=99999">rainy weather</a> on that day also contributes to the likelihood of March 14, 2014.</p>
</section><section><div class='columns columns-2'><div class='column'><table>
<thead><tr>
@@ -187,13 +338,19 @@ under Grants IIS-10-17017 and IIS-14-20894.</p>
</tr>
</tbody>
</table>
-</div></div></section><section><h3>Opting Out</h3>
-<p>If you attended Duke University and were captured by any of the 8 surveillance cameras positioned on campus in 2014, there is unfortunately no way to be removed. The dataset files have been distributed throughout the world and it would not be possible to contact all the owners for removal. Nor do the authors provide any options for students to opt-out, nor did they even inform students they would be used at test subjects for surveillance research and development in a project funded, in part, by the United States Army Research Office.</p>
-<h4>Notes</h4>
+</div></div></section><section><h4>Notes</h4>
<ul>
<li>The Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812</li>
</ul>
-</section><section>
+<p>If you use any data from the Duke MTMC dataset, please follow their <a href="http://vision.cs.duke.edu/DukeMTMC/#how-to-cite">license</a> and cite their work as:</p>
+<pre>
+@inproceedings{ristani2016MTMC,
+ title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
+ author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
+ booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
+ year = {2016}
+}
+</pre></section><section>
<h4>Cite Our Work</h4>
<p>
@@ -210,43 +367,29 @@ under Grants IIS-10-17017 and IIS-14-20894.</p>
}</pre>
</p>
-</section><section><p>If you use any data from the Duke MTMC please follow their <a href="http://vision.cs.duke.edu/DukeMTMC/#how-to-cite">license</a> and cite their work as:</p>
-<pre>
-@inproceedings{ristani2016MTMC,
- title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
- author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
- booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
- year = {2016}
-}
-</pre><h4>ToDo</h4>
+</section><section><h4>ToDo</h4>
<ul>
<li>clean up citations, formatting</li>
</ul>
-</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^xinjiang_nyt]" class="footnote_shim"></a><span class="backlinks"><a href="#[^xinjiang_nyt]_1">a</a></span><p>Mozur, Paul. "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority". <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html">https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html</a>. April 14, 2019.</p>
+</li><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
-</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a><a href="#[^sensenets_sensetime]_2">b</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">PDF</a></p>
-</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a><a href="#[^sensetime1]_2">b</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">PDF</a></p>
-</li><li><a name="[^sensetime2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime2]_1">a</a><a href="#[^sensetime2]_2">b</a></span><p>"Person Re-identification with Deep Similarity-Guided Graph Neural Network". 2018. <a href="https://www.semanticscholar.org/paper/Person-Re-identification-with-Deep-Graph-Neural-Shen-Li/08d2a558ea2deb117dd8066e864612bf2899905b">SemanticScholar</a></p>
-</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a><a href="#[^duke_mtmc_orig]_2">b</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">SemanticScholar</a></p>
-</li><li><a name="[^cn_defense1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense1]_1">a</a></span><p>"Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers". 2018. <a href="https://www.semanticscholar.org/paper/Tracking-by-Animation%3A-Unsupervised-Learning-of-He-Liu/e90816e1a0e14ea1e7039e0b2782260999aef786">SemanticScholar</a></p>
-</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. <a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">SemanticScholar</a></p>
-</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">SemanticScholar</a></p>
-</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">SemanticScholar</a></p>
+</li><li><a name="[^duke_mtmc_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^duke_mtmc_orig]_1">a</a></span><p>"Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking". 2016. <a href="https://www.semanticscholar.org/paper/Performance-Measures-and-a-Data-Set-for-Tracking-Ristani-Solera/27a2fad58dd8727e280f97036e0d2bc55ef5424c">SemanticScholar</a></p>
</li></ul></section></section>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
index 387826b0..7f9ed94c 100644
--- a/site/public/datasets/feret/index.html
+++ b/site/public/datasets/feret/index.html
@@ -119,17 +119,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html
index 6b9ae7be..4e566a4a 100644
--- a/site/public/datasets/hrt_transgender/index.html
+++ b/site/public/datasets/hrt_transgender/index.html
@@ -49,17 +49,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index 75961089..5c8e2546 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -28,7 +28,7 @@
<section><h1>Facial Recognition Datasets</h1>
-<p>Explore publicly available facial recognition datasets. More datasets will be added throughout 2019.</p>
+<p>Explore publicly available facial recognition datasets feeding into the research and development of biometric surveillance technologies at some of the world's largest technology companies and defense contractors.</p>
</section>
<section class='applet_container autosize'><div class='applet' data-payload='{"command":"dataset_list"}'></div></section>
@@ -115,17 +115,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
index 45de2599..a9eb025d 100644
--- a/site/public/datasets/lfpw/index.html
+++ b/site/public/datasets/lfpw/index.html
@@ -98,17 +98,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index 7997629f..ff7a3cd9 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -148,17 +148,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index 7c545335..05750dc7 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -114,17 +114,17 @@ organization={Springer}
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index 8b070118..84c62bd2 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -123,17 +123,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index b48efe3e..2c7c26fc 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -138,17 +138,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index 6c920b46..ae8aef6d 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -102,17 +102,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html
index e81e12bc..ef289954 100644
--- a/site/public/datasets/pubfig/index.html
+++ b/site/public/datasets/pubfig/index.html
@@ -99,17 +99,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index 32f7cdb2..9347d536 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -51,12 +51,11 @@
<div><a href='http://vast.uccs.edu/Opensetface/' target='_blank' rel='nofollow noopener'>uccs.edu</a></div>
</div></div><p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs developed primarily for research and development of "face detection and recognition research towards surveillance applications"<a class="footnote_shim" name="[^uccs_vast]_1"> </a><a href="#[^uccs_vast]" class="footnote" title="Footnote 1">1</a>. According to the authors of two papers associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge".<a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 3">3</a> In this investigation, we examine the contents of the dataset, funding sources, photo EXIF data, and information from publicly available research project citations.</p>
<p>The UCCS dataset includes over 1,700 unique identities, most of which are students walking to and from class. As of 2018, it was the "largest surveillance [face recognition] benchmark in the public domain."<a class="footnote_shim" name="[^surv_face_qmul]_1"> </a><a href="#[^surv_face_qmul]" class="footnote" title="Footnote 4">4</a> The photos were taken during the spring semesters of 2012 &ndash; 2013 on the West Lawn of the University of Colorado Colorado Springs campus. The photographs were timed to capture students during breaks between their scheduled classes in the morning and afternoon during Monday through Thursday. "For example, a student taking Monday-Wednesday classes at 12:30 PM will show up in the camera on almost every Monday and Wednesday."<a class="footnote_shim" name="[^sapkota_boult]_1"> </a><a href="#[^sapkota_boult]" class="footnote" title="Footnote 2">2</a>.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_grid.jpg' alt=' Example images from the UnConstrained College Students Dataset. '><div class='caption'> Example images from the UnConstrained College Students Dataset. </div></div></section><section><p>The long-range surveillance images in the UnContsrained College Students dataset were captured using a Canon 7D 18 megapixel digital camera fitted with a Sigma 800mm F5.6 EX APO DG HSM telephoto lens and pointed out an office window across the university's West Lawn. The students were photographed from a distance of approximately 150 meters through an office window. "The camera [was] programmed to start capturing images at specific time intervals between classes to maximize the number of faces being captured."<a class="footnote_shim" name="[^sapkota_boult]_2"> </a><a href="#[^sapkota_boult]" class="footnote" title="Footnote 2">2</a>
-Their setup made it impossible for students to know they were being photographed, providing the researchers with realistic surveillance images to help build face detection and recognition systems for real world applications in defense, intelligence, and commercial applications.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map_aerial.jpg' alt=' The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps'><div class='caption'> The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps</div></div></section><section><p>In the two papers associated with the release of the UCCS dataset (<a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">Unconstrained Face Detection and Open-Set Face Recognition Challenge</a> and <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">Large Scale Unconstrained Open Set Face Database</a>), the researchers disclosed their funding sources as ODNI (United States Office of Director of National Intelligence), IARPA (Intelligence Advance Research Projects Activity), ONR MURI (Office of Naval Research and The Department of Defense Multidisciplinary University Research Initiative), Army SBIR (Small Business Innovation Research), SOCOM SBIR (Special Operations Command and Small Business Innovation Research), and the National Science Foundation. Further, UCCS's VAST site explicity <a href="https://vast.uccs.edu/project/iarpa-janus/">states</a> they are part of the <a href="https://www.iarpa.gov/index.php/research-programs/janus">IARPA Janus</a>, a face recognition project developed to serve the needs of national intelligence interests.</p>
-<p>The EXIF data embedded in the images shows that the photo capture times follow a similar pattern, but also highlights that the vast majority of photos (over 7,000) were taken on Tuesdays around noon during students' lunch break. The lack of any photos taken on Friday shows that the researchers were only interested in capturing images of students.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot_days.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section><p>The two research papers associated with the release of the UCCS dataset (<a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">Unconstrained Face Detection and Open-Set Face Recognition Challenge</a> and <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">Large Scale Unconstrained Open Set Face Database</a>), acknowledge that the primary funding sources for their work were United States defense and intelligence agencies. Specifically, development of the UnContrianed College Students dataset was funded by the Intelligence Advanced Research Projects Activity (IARPA), Office of Director of National Intelligence (ODNI), Office of Naval Research and The Department of Defense Multidisciplinary University Research Initiative (ONR MURI), Small Business Innovation Research (SBIR), Special Operations Command and Small Business Innovation Research (SOCOM SBIR), and the National Science Foundation. 
Further, UCCS's VAST site explicitly <a href="https://vast.uccs.edu/project/iarpa-janus/">states</a> they are part of the <a href="https://www.iarpa.gov/index.php/research-programs/janus">IARPA Janus</a>, a face recognition project developed to serve the needs of national intelligence interests, clearly establishing the the funding sources and immediate benefactors of this dataset are United States defense and intelligence agencies.</p>
-<p>Although the images were first captured in 2012 &ndash; 2013 the dataset was not publicly released until 2016. Then in 2017 the UCCS face dataset formed the basis for a defense and intelligence agency funded <a href="http://www.face-recognition-challenge.com/">face recognition challenge</a> project at the International Joint Biometrics Conference in Denver, CO. And in 2018 the dataset was again used for the <a href="https://erodner.github.io/ial2018eccv/">2nd Unconstrained Face Detection and Open Set Recognition Challenge</a> at the European Computer Vision Conference (ECCV) in Munich, Germany.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_grid.jpg' alt=' Example images from the UnConstrained College Students Dataset. '><div class='caption'> Example images from the UnConstrained College Students Dataset. </div></div></section><section><p>The long-range surveillance images in the UnConstrained College Students dataset were taken using a Canon 7D 18-megapixel digital camera fitted with a Sigma 800mm F5.6 EX APO DG HSM telephoto lens, pointed out an office window across the university's West Lawn. The students were photographed from a distance of approximately 150 meters. "The camera [was] programmed to start capturing images at specific time intervals between classes to maximize the number of faces being captured."<a class="footnote_shim" name="[^sapkota_boult]_2"> </a><a href="#[^sapkota_boult]" class="footnote" title="Footnote 2">2</a>
Their setup made it impossible for students to know they were being photographed, providing the researchers with realistic surveillance images to help build face recognition systems for real-world applications in defense, intelligence, and commercial sectors.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map_aerial.jpg' alt=' The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps'><div class='caption'> The location at University of Colorado Colorado Springs where students were surreptitiously photographed with a long-range surveillance camera for use in a defense and intelligence agency funded research project on face recognition. Image: Google Maps</div></div></section><section><p>The EXIF data embedded in the images shows that the photo capture times follow a similar pattern, but also highlights that the vast majority of photos (over 7,000) were taken on Tuesdays around noon during students' lunch break. The lack of any photos taken on Friday shows that the researchers were only interested in capturing images of students.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot_days.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot.png' alt=' UCCS photos captured per weekday &copy; megapixels.cc'><div class='caption'> UCCS photos captured per weekday &copy; megapixels.cc</div></div></section><section><p>The two research papers associated with the release of the UCCS dataset (<a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">Unconstrained Face Detection and Open-Set Face Recognition Challenge</a> and <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">Large Scale Unconstrained Open Set Face Database</a>) acknowledge that the primary funding sources for their work were United States defense and intelligence agencies. Specifically, development of the UnConstrained College Students dataset was funded by the Intelligence Advanced Research Projects Activity (IARPA), the Office of the Director of National Intelligence (ODNI), the Office of Naval Research and Department of Defense Multidisciplinary University Research Initiative (ONR MURI), and the Special Operations Command Small Business Innovation Research program (SOCOM SBIR), among others. 
UCCS's VAST site also explicitly <a href="https://vast.uccs.edu/project/iarpa-janus/">states</a> that they are part of <a href="https://www.iarpa.gov/index.php/research-programs/janus">IARPA Janus</a>, a face recognition project developed to serve the needs of national intelligence interests, clearly establishing that the funding sources and immediate benefactors of this dataset are United States defense and intelligence agencies.</p>
+<p>Although the images were first captured in 2012 &ndash; 2013, the dataset was not publicly released until 2016. In 2017, the UCCS face dataset formed the basis for a defense and intelligence agency funded <a href="http://www.face-recognition-challenge.com/">face recognition challenge</a> project at the International Joint Conference on Biometrics in Denver, CO. In 2018, the dataset was again used for the <a href="https://erodner.github.io/ial2018eccv/">2nd Unconstrained Face Detection and Open Set Recognition Challenge</a> at the European Conference on Computer Vision (ECCV) in Munich, Germany.</p>
<p>As of April 15, 2019, the UCCS dataset is no longer available for public download. But during the three years it was publicly available (2016 &ndash; 2019), the dataset appeared in at least six publicly available research papers, including verified usage by Beihang University, which is known to provide research and development for China's military.</p>
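The per-weekday photo counts discussed above are derived from the EXIF capture timestamps embedded in the dataset's images. A minimal sketch of that kind of tally, assuming EXIF `DateTimeOriginal` strings have already been extracted (the helper name and sample timestamps are illustrative; real values would be read from each JPEG, e.g. via Pillow's `Image.getexif()`):

```python
from collections import Counter
from datetime import datetime

def weekday_counts(exif_timestamps):
    """Tally capture counts per weekday from EXIF DateTimeOriginal
    strings, which use the format 'YYYY:MM:DD HH:MM:SS'."""
    days = (datetime.strptime(ts, "%Y:%m:%d %H:%M:%S").strftime("%A")
            for ts in exif_timestamps)
    return Counter(days)

# Hypothetical sample timestamps; the real analysis would iterate
# over every image file in the dataset.
counts = weekday_counts([
    "2012:04:17 12:05:33",  # a Tuesday, around the lunch break
    "2012:04:17 12:06:10",
    "2012:04:18 09:30:00",  # a Wednesday morning
])
```

Plotting such a tally would reproduce the weekday distribution shown in the charts above, including the Tuesday peak and the absence of Friday captures.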
</section><section>
<h3>Who used UCCS?</h3>
@@ -259,17 +258,17 @@ Their setup made it impossible for students to know they were being photographed
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
index a9d318f1..24ce4b2d 100644
--- a/site/public/datasets/vgg_face2/index.html
+++ b/site/public/datasets/vgg_face2/index.html
@@ -124,17 +124,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html
index bc4ddd3d..e4b2a05a 100644
--- a/site/public/datasets/viper/index.html
+++ b/site/public/datasets/viper/index.html
@@ -104,17 +104,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html
index 69b3a02e..e90b45cb 100644
--- a/site/public/datasets/youtube_celebrities/index.html
+++ b/site/public/datasets/youtube_celebrities/index.html
@@ -95,17 +95,17 @@ the views of our sponsors.</li>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/info/index.html b/site/public/info/index.html
index 749c29ba..7e7ecf80 100644
--- a/site/public/info/index.html
+++ b/site/public/info/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/research/00_introduction/index.html b/site/public/research/00_introduction/index.html
index 353e3270..535958cc 100644
--- a/site/public/research/00_introduction/index.html
+++ b/site/public/research/00_introduction/index.html
@@ -83,17 +83,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/research/01_from_1_to_100_pixels/index.html b/site/public/research/01_from_1_to_100_pixels/index.html
index 9426ef0f..fe49e998 100644
--- a/site/public/research/01_from_1_to_100_pixels/index.html
+++ b/site/public/research/01_from_1_to_100_pixels/index.html
@@ -121,17 +121,17 @@ relying on FaceID and TouchID to protect their information agree to a</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/research/02_what_computers_can_see/index.html b/site/public/research/02_what_computers_can_see/index.html
index 920f78cc..d139e83e 100644
--- a/site/public/research/02_what_computers_can_see/index.html
+++ b/site/public/research/02_what_computers_can_see/index.html
@@ -292,17 +292,17 @@ Head top</p>
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/research/index.html b/site/public/research/index.html
index 1be8203f..0386fa99 100644
--- a/site/public/research/index.html
+++ b/site/public/research/index.html
@@ -31,17 +31,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/chart/index.html b/site/public/test/chart/index.html
index e882ecc5..05081cf5 100644
--- a/site/public/test/chart/index.html
+++ b/site/public/test/chart/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/citations/index.html b/site/public/test/citations/index.html
index a8af41df..36021752 100644
--- a/site/public/test/citations/index.html
+++ b/site/public/test/citations/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/csv/index.html b/site/public/test/csv/index.html
index 2c2242b4..301ed718 100644
--- a/site/public/test/csv/index.html
+++ b/site/public/test/csv/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/datasets/index.html b/site/public/test/datasets/index.html
index bf08418f..58555895 100644
--- a/site/public/test/datasets/index.html
+++ b/site/public/test/datasets/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/face_search/index.html b/site/public/test/face_search/index.html
index 75bb907b..e2db70df 100644
--- a/site/public/test/face_search/index.html
+++ b/site/public/test/face_search/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/gallery/index.html b/site/public/test/gallery/index.html
index 8958f369..869c3aaa 100644
--- a/site/public/test/gallery/index.html
+++ b/site/public/test/gallery/index.html
@@ -50,17 +50,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/index.html b/site/public/test/index.html
index e660bb2d..9c15d431 100644
--- a/site/public/test/index.html
+++ b/site/public/test/index.html
@@ -43,17 +43,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/map/index.html b/site/public/test/map/index.html
index 21229ec1..ba2756ae 100644
--- a/site/public/test/map/index.html
+++ b/site/public/test/map/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/name_search/index.html b/site/public/test/name_search/index.html
index b0bdb86f..c956ff0b 100644
--- a/site/public/test/name_search/index.html
+++ b/site/public/test/name_search/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>
diff --git a/site/public/test/pie_chart/index.html b/site/public/test/pie_chart/index.html
index 98a89ff4..2e3ba39c 100644
--- a/site/public/test/pie_chart/index.html
+++ b/site/public/test/pie_chart/index.html
@@ -32,17 +32,17 @@
</div>
<footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels &copy;2017-19 Adam R. Harvey /&nbsp;
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
+ <ul class="footer-left">
+ <li><a href="/">MegaPixels.cc</a></li>
+ <li><a href="/datasets/">Datasets</a></li>
+ <li><a href="/about/">About</a></li>
+ <li><a href="/about/press/">Press</a></li>
+ <li><a href="/about/legal/">Legal and Privacy</a></li>
+ </ul>
+ <ul class="footer-right">
+ <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
+ <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
+ </ul>
</footer>
</body>