Diffstat (limited to 'site/public/datasets')
-rw-r--r--site/public/datasets/50_people_one_question/index.html115
-rw-r--r--site/public/datasets/afad/index.html128
-rw-r--r--site/public/datasets/brainwash/ijb_c/index.html152
-rw-r--r--site/public/datasets/brainwash/index.html6
-rw-r--r--site/public/datasets/caltech_10k/index.html125
-rw-r--r--site/public/datasets/celeba/index.html127
-rw-r--r--site/public/datasets/cofw/index.html180
-rw-r--r--site/public/datasets/duke_mtmc/index.html6
-rw-r--r--site/public/datasets/feret/index.html138
-rw-r--r--site/public/datasets/hrt_transgender/index.html4
-rw-r--r--site/public/datasets/ijb_c/index.html6
-rw-r--r--site/public/datasets/index.html4
-rw-r--r--site/public/datasets/lfpw/index.html117
-rw-r--r--site/public/datasets/lfw/index.html167
-rw-r--r--site/public/datasets/market_1501/index.html133
-rw-r--r--site/public/datasets/msceleb/assets/notes/index.html4
-rw-r--r--site/public/datasets/msceleb/index.html6
-rw-r--r--site/public/datasets/oxford_town_centre/index.html6
-rw-r--r--site/public/datasets/pipa/index.html121
-rw-r--r--site/public/datasets/pubfig/index.html118
-rw-r--r--site/public/datasets/uccs/assets/notes/index.html4
-rw-r--r--site/public/datasets/uccs/index.html6
-rw-r--r--site/public/datasets/vgg_face2/index.html143
-rw-r--r--site/public/datasets/viper/index.html123
-rw-r--r--site/public/datasets/youtube_celebrities/index.html114
25 files changed, 46 insertions, 2007 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
deleted file mode 100644
index bc879799..00000000
--- a/site/public/datasets/50_people_one_question/index.html
+++ /dev/null
@@ -1,115 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="People One Question is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>50 People One Question Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/50_people_one_question/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">People One Question</span> is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms</span></div><div class='hero_subdesc'><span class='bgpad'>People One Question dataset includes ...
-</span></div></div></section><section><h2>50 People 1 Question</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2013</div>
- </div><div class='meta'>
- <div class='gray'>Videos</div>
- <div>33 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Facial landmark estimation</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://www.vision.caltech.edu/~dhall/projects/MergingPoseEstimates/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used 50 People One Question Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how 50 People One Question Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing 50 People One Question was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
deleted file mode 100644
index f5a04251..00000000
--- a/site/public/datasets/afad/index.html
+++ /dev/null
@@ -1,128 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="AFAD: Asian Face Age Dataset" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>Asian Face Age Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>Asian Face Age Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2017</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>164,432 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>age estimation on Asian Faces</div>
- </div><div class='meta'>
- <div class='gray'>Funded by</div>
- <div>NSFC, the Fundamental Research Funds for the Central Universities, the Program for Changjiang Scholars and Innovative Research Team in University of China, the Shaanxi Innovative Research Team for Key Science and Technology, and China Postdoctoral Science Foundation</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://afad-dataset.github.io/' target='_blank' rel='nofollow noopener'>github.io</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used Asian Face Age Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how Asian Face Age Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing The Asian Face Age Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h2>(ignore) research notes</h2>
-<blockquote><p>The Asian Face Age Dataset (AFAD) is a new dataset proposed for evaluating the performance of age estimation, which contains more than 160K facial images and the corresponding age and gender labels. This dataset is oriented to age estimation on Asian faces, so all the facial images are for Asian faces. It is noted that the AFAD is the biggest dataset for age estimation to date. It is well suited to evaluate how deep learning methods can be adopted for age estimation.
-Motivation</p>
-<p>For age estimation, there are several public datasets for evaluating the performance of a specific algorithm, such as FG-NET [1] (1002 face images), MORPH I (1690 face images), and MORPH II[2] (55,608 face images). Among them, the MORPH II is the biggest public dataset to date. On the other hand, as we know it is necessary to collect a large scale dataset to train a deep Convolutional Neural Network. Therefore, the MORPH II dataset is extensively used to evaluate how deep learning methods can be adopted for age estimation [3][4].</p>
-<p>However, the ethnic is very unbalanced for the MORPH II dataset, i.e., it has only less than 1% Asian faces. In order to evaluate the previous methods for age estimation on Asian Faces, the Asian Face Age Dataset (AFAD) was proposed.</p>
-<p>There are 164,432 well-labeled photos in the AFAD dataset. It consist of 63,680 photos for female as well as 100,752 photos for male, and the ages range from 15 to 40. The distribution of photo counts for distinct ages are illustrated in the figure above. Some samples are shown in the Figure on the top. Its download link is provided in the "Download" section.</p>
-<p>In addition, we also provide a subset of the AFAD dataset, called AFAD-Lite, which only contains PLACEHOLDER well-labeled photos. It consist of PLACEHOLDER photos for female as well as PLACEHOLDER photos for male, and the ages range from 15 to 40. The distribution of photo counts for distinct ages are illustrated in Fig. PLACEHOLDER. Its download link is also provided in the "Download" section.</p>
-<p>The AFAD dataset is built by collecting selfie photos on a particular social network -- RenRen Social Network (RSN) [5]. The RSN is widely used by Asian students including middle school, high school, undergraduate, and graduate students. Even after leaving from school, some people still access their RSN account to connect with their old classmates. So, the age of the RSN user crosses a wide range from 15-years to more than 40-years old.</p>
-<p>Please notice that this dataset is made available for academic research purpose only.</p>
-</blockquote>
-<p><a href="https://afad-dataset.github.io/">https://afad-dataset.github.io/</a></p>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/brainwash/ijb_c/index.html b/site/public/datasets/brainwash/ijb_c/index.html
deleted file mode 100644
index f57d180b..00000000
--- a/site/public/datasets/brainwash/ijb_c/index.html
+++ /dev/null
@@ -1,152 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="IJB-C is a dataset ..." />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='splash'>IJB-C</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>IJB-C is a dataset ...</span></div><div class='hero_subdesc'><span class='bgpad'>The IJB-C dataset contains...
-</span></div></div></section><section><h2>Brainwash Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2017</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>21,294 </div>
- </div><div class='meta'>
- <div class='gray'>Videos</div>
- <div>11,779 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>3,531 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face recognition challenge by NIST in full motion videos</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://www.nist.gov/programs-projects/face-challenges' target='_blank' rel='nofollow noopener'>nist.gov</a></div>
- </div></div><p>Brainwash is a dataset of livecam images taken from San Francisco's Brainwash Cafe. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset includes 3 full days of webcam images taken on October 27, November 13, and November 24 in 2014. According to the author's <a href="https://www.semanticscholar.org/paper/End-to-End-People-Detection-in-Crowded-Scenes-Stewart-Andriluka/1bd1645a629f1b612960ab9bba276afd4cf7c666">research paper</a> introducing the dataset, the images were acquired with the help of Angelcam.com<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
-<p>The Brainwash dataset is unique because it uses images from a publicly available webcam that records people inside a privately owned business without any consent. No ordinary cafe customer could ever suspect their image would end up in a dataset used for surveillance research and development, but that is exactly what happened to customers at Brainwash cafe in San Francisco.</p>
-<p>Although Brainwash appears to be a less popular dataset, in 2016 and 2017 researchers from the National University of Defense Technology in China took note of the dataset and used it for two <a href="https://www.semanticscholar.org/paper/Localized-region-context-and-object-feature-fusion-Li-Dou/b02d31c640b0a31fb18c4f170d841d8e21ffb66c">research</a> <a href="https://www.semanticscholar.org/paper/A-Replacement-Algorithm-of-Non-Maximum-Suppression-Zhao-Wang/591a4bfa6380c9fcd5f3ae690e3ac5c09b7bf37b">projects</a> on advancing the capabilities of object detection to more accurately isolate the target region in an image (<a href="https://www.itm-conferences.org/articles/itmconf/pdf/2017/04/itmconf_ita2017_05006.pdf">PDF</a>). <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a>. The dataset also appears in a 2017 <a href="https://ieeexplore.ieee.org/document/7877809">research paper</a> from Peking University for the purpose of improving surveillance capabilities for "people detection in the crowded scenes".</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_grid.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
- <h3>Who used IJB-C?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how IJB-C has been used around the world by commercial, military, and academic organizations; existing publicly available research citing IARPA Janus Benchmark C was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section>
-
- <div class="hr-wave-holder">
- <div class="hr-wave-line hr-wave-line1"></div>
- <div class="hr-wave-line hr-wave-line2"></div>
- </div>
-
- <h2>Supplementary Information</h2>
-
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_example.jpg' alt=' A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains 11,916 more images like this one. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A visualization of the active regions for 81,973 head annotations from the Brainwash dataset training partition. Credit: megapixels.cc. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section>
-
- <h4>Cite Our Work</h4>
- <p>
-
- If you use our data, research, or graphics please cite our work:
-
-<pre id="cite-bibtex">
-@online{megapixels,
- author = {Harvey, Adam. LaPlace, Jules.},
- title = {MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets},
- year = 2019,
- url = {https://megapixels.cc/},
- urldate = {2019-04-18}
-}</pre>
-
- </p>
-</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
-</li><li><a name="[^end_to_end]" class="footnote_shim"></a><span class="backlinks"><a href="#[^end_to_end]_1">a</a></span><p>Stewart, Russell. Andriluka, Mykhaylo. "End-to-end people detection in crowded scenes". 2016.</p>
-</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"><a href="#[^localized_region_context]_1">a</a></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p>
-</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^replacement_algorithm]_1">a</a></span><p>Zhao, X., Wang, Y., Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p>
-</li></ul></section></section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 7d9232ce..e1717179 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -104,7 +108,7 @@
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
To help understand how Brainwash Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html
deleted file mode 100644
index 5848b804..00000000
--- a/site/public/datasets/caltech_10k/index.html
+++ /dev/null
@@ -1,125 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="Caltech 10K Faces Dataset" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>Brainwash Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>Caltech 10K Faces Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>11,917 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Head detection</div>
- </div><div class='meta'>
- <div class='gray'>Created by</div>
- <div>Stanford University (US), Max Planck Institute for Informatics (DE)</div>
- </div><div class='meta'>
- <div class='gray'>Funded by</div>
- <div>Max Planck Center for Visual Computing and Communication</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- <div>4.1 GB</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used Brainwash Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how Brainwash Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h3>(ignore) research notes</h3>
-<p>The dataset contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low resolution faces are not labeled.</p>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
deleted file mode 100644
index 92c0e334..00000000
--- a/site/public/datasets/celeba/index.html
+++ /dev/null
@@ -1,127 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="CelebA is a dataset of people..." />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>CelebA Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/celeba/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">CelebA</span> is a dataset of people...</span></div><div class='hero_subdesc'><span class='bgpad'>CelebA includes...
-</span></div></div></section><section><h2>CelebA Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>202,599 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>10,177 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face attribute recognition, face detection, and landmark (or facial part) localization</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- <div>1.4 GB</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html' target='_blank' rel='nofollow noopener'>edu.hk</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used CelebA Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how CelebA Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Large-scale CelebFaces Attributes Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h3>Research</h3>
-<ul>
-<li>"An Unsupervised Approach to Solving Inverse Problems using Generative Adversarial Networks" mentions use in work "sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their"</li>
-<li>7dab6fbf42f82f0f5730fc902f72c3fb628ef2f0</li>
-<li>principal responsibility is ensuring the safety, security and reliability of the nation's nuclear weapons NNSA ( National Nuclear Security Administration )</li>
-</ul>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
deleted file mode 100644
index fd6d86ae..00000000
--- a/site/public/datasets/cofw/index.html
+++ /dev/null
@@ -1,180 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="COFW: Caltech Occluded Faces in The Wild" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>COFW Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>Caltech Occluded Faces in the Wild</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2013</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>1,007 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>challenging dataset (sunglasses, hats, interaction with objects)</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://www.vision.caltech.edu/xpburgos/ICCV13/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used COFW Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how COFW Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Caltech Occluded Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h3>(ignore) research notes</h3>
-</section><section><div class='meta'><div><div class='gray'>Years</div><div>1993-1996</div></div><div><div class='gray'>Images</div><div>14,126</div></div><div><div class='gray'>Identities</div><div>1,199 </div></div><div><div class='gray'>Origin</div><div>Web Searches</div></div><div><div class='gray'>Funded by</div><div>ODNI, IARPA, Microsoft</div></div></div><section><section><p>COFW "is designed to benchmark face landmark algorithms in realistic conditions, which include heavy occlusions and large shape variations" [Robust face landmark estimation under occlusion].</p>
-<blockquote><p>We asked four people with different levels of computer vision knowledge to each collect 250 faces representative of typical real-world images, with the clear goal of challenging computer vision methods.
-The result is 1,007 images of faces obtained from a variety of sources.</p>
-</blockquote>
-<p>Robust face landmark estimation under occlusion</p>
-<blockquote><p>Our face dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones, etc.). All images were hand annotated in our lab using the same 29 landmarks as in LFPW. We annotated both the landmark positions as well as their occluded/unoccluded state. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23%.
-To increase the number of training images, and since COFW has the exact same landmarks as LFPW, for training we use the original non-augmented 845 LFPW faces + 500 COFW faces (1345 total), and for testing the remaining 507 COFW faces. To make sure all images had occlusion labels, we annotated occlusion on the available 845 LFPW training images, finding an average of only 2% occlusion.</p>
-</blockquote>
-<p><a href="http://www.vision.caltech.edu/xpburgos/ICCV13/">http://www.vision.caltech.edu/xpburgos/ICCV13/</a></p>
-<blockquote><p>This research is supported by NSF Grant 0954083 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&amp;D Contract No. 2014-14071600012.</p>
-</blockquote>
-<p><a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a></p>
-</section><section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how COFW Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Caltech Occluded Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the location markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div >
-</div><section>
-
- <div class="hr-wave-holder">
- <div class="hr-wave-line hr-wave-line1"></div>
- <div class="hr-wave-line hr-wave-line2"></div>
- </div>
-
- <h2>Supplementary Information</h2>
-
-</section><section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section>
- <h3>Who used COFW Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section><section><p>TODO</p>
-<h2>- replace graphic</h2>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 86dc60af..c12f1522 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -267,7 +271,7 @@
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
To help understand how Duke MTMC Dataset has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
deleted file mode 100644
index 88b025ae..00000000
--- a/site/public/datasets/feret/index.html
+++ /dev/null
@@ -1,138 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="LFW: Labeled Faces in The Wild" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>LFW</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h1>FacE REcognition Technology (FERET) Dataset</h1>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2007</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>13,233 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>5,749 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face recognition</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://vis-www.cs.umass.edu/lfw/' target='_blank' rel='nofollow noopener'>umass.edu</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used LFW?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how LFW has been used around the world by commercial, military, and academic organizations; existing publicly available research citing Labeled Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h3>(ignore) RESEARCH below this line</h3>
-<ul>
-<li>Years: 1993-1996</li>
-<li>Images: 14,126</li>
-<li>Identities: 1,199 </li>
-<li>Origin: Fairfax, VA</li>
-<li><em>Face Recognition Technology</em> (FERET) was a program to develop, test, and evaluate face recognition algorithms</li>
-<li>The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties.</li>
-<li><a href="https://www.nist.gov/programs-projects/face-recognition-technology-feret">https://www.nist.gov/programs-projects/face-recognition-technology-feret</a></li>
-</ul>
-<h3>"The FERET database and evaluation procedure for face-recognition algorithms"</h3>
-<ul>
-<li>Images were captured using Kodak Ultra film</li>
-<li>The facial images were collected in 11 sessions from August 1993 to December 1994, conducted at George Mason University and at US Army Research Laboratory facilities.</li>
-</ul>
-<h3>FERET (Face Recognition Technology) Recognition Algorithm Development and Test Results</h3>
-<ul>
-<li>"A release form is necessary because of the privacy laws in the United States."</li>
-</ul>
-<h2>Funding</h2>
-<p>The FERET program is sponsored by the U.S. Department of Defense’s Counterdrug Technology Development Program Office. The U.S. Army Research Laboratory (ARL) is the technical agent for the FERET program. ARL designed, administered, and scored the FERET tests. George Mason University collected, processed, and maintained the FERET database. Inquiries regarding the FERET database or test should be directed to P. Jonathon Phillips.</p>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html
index 4f046aa7..1859e830 100644
--- a/site/public/datasets/hrt_transgender/index.html
+++ b/site/public/datasets/hrt_transgender/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
diff --git a/site/public/datasets/ijb_c/index.html b/site/public/datasets/ijb_c/index.html
index 5dbad086..511420b9 100644
--- a/site/public/datasets/ijb_c/index.html
+++ b/site/public/datasets/ijb_c/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -156,7 +160,7 @@
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
     To help understand how IJB-C has been used around the world by commercial, military, and academic organizations, existing publicly available research citing IARPA Janus Benchmark C was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index a5c9c14a..4c50ceb5 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-">
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
deleted file mode 100644
index 68c3e033..00000000
--- a/site/public/datasets/lfpw/index.html
+++ /dev/null
@@ -1,117 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="LFPW: Labeled Face Parts in The Wild" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
-    <div class='page_name'>LFPW</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>Labeled Face Parts in The Wild</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2011</div>
- </div><div class='meta'>
- <div class='gray'>Funded by</div>
- <div>CIA</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://neerajkumar.org/databases/lfpw/' target='_blank' rel='nofollow noopener'>neerajkumar.org</a></div>
- </div></div></section><section>
- <h3>Who used LFPW?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how LFPW has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Labeled Face Parts in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><p>RESEARCH below this line</p>
-<blockquote><p>Release 1 of LFPW consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in dataset. LFPW was originally described in the following publication:</p>
-<p>Due to copyright issues, we cannot distribute image files in any format to anyone. Instead, we have made available a list of image URLs where you can download the images yourself. We realize that this makes it impossible to exactly compare numbers, as image links will slowly disappear over time, but we have no other option. This seems to be the way other large web-based databases seem to be evolving.</p>
-</blockquote>
-<p><a href="https://neerajkumar.org/databases/lfpw/">https://neerajkumar.org/databases/lfpw/</a></p>
-<blockquote><p>This research was performed at Kriegman-Belhumeur Vision Technologies and was funded by the CIA through the Office of the Chief Scientist. <a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a> (nk_cvpr2011_faceparts.pdf)</p>
-</blockquote>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
deleted file mode 100644
index 7ae440a8..00000000
--- a/site/public/datasets/lfw/index.html
+++ /dev/null
@@ -1,167 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="Labeled Faces in The Wild (LFW) is the first facial recognition dataset created entirely from online photos" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>LFW</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Labeled Faces in The Wild (LFW)</span> is the first facial recognition dataset created entirely from online photos</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 13,233 images of 5,749 people copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms.
-</span></div></div></section><section><h2>Labeled Faces in the Wild</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2007</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>13,233 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>5,749 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face recognition</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://vis-www.cs.umass.edu/lfw/' target='_blank' rel='nofollow noopener'>umass.edu</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT ]</p>
-<p><em>Labeled Faces in The Wild</em> (LFW) is "a database of face photographs designed for studying the problem of unconstrained face recognition"<a class="footnote_shim" name="[^lfw_www]_1"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a>. It is used to evaluate and improve the performance of facial recognition algorithms in academic, commercial, and government research. According to BiometricUpdate.com<a class="footnote_shim" name="[^lfw_pingan]_1"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a>, as "the most widely used evaluation set in the field of facial recognition, LFW attracts a few dozen teams from around the globe including Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong."</p>
-<p>The LFW dataset includes 13,233 images of 5,749 people that were collected between 2002 and 2004. LFW is a subset of <em>Names and Faces</em> and is part of the first facial recognition training dataset created entirely from images appearing on the Internet. The people appearing in LFW are...</p>
-<p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entirely from online photos. However, <em>Names and Faces</em> and <em>LFW</em> are not the first face recognition datasets created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. Obtaining images "in the wild" means using them without the explicit consent or awareness of the subject or photographer.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/lfw_montage_all_crop.jpg' alt='All 5,379 people in the Labeled Faces in The Wild Dataset. Showing one face per person'><div class='caption'>All 5,379 people in the Labeled Faces in The Wild Dataset. Showing one face per person</div></div></section><section>
-</section><section>
- <h3>Who used LFW?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how LFW has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Labeled Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section>
-
- <div class="hr-wave-holder">
- <div class="hr-wave-line hr-wave-line1"></div>
- <div class="hr-wave-line hr-wave-line2"></div>
- </div>
-
- <h2>Supplementary Information</h2>
-
-</section><section><h3>Commercial Use</h3>
-<p>Add a paragraph about how usage extends far beyond academia into research centers for largest companies in the world. And even funnels into CIA funded research in the US and defense industry usage in China.</p>
-</section><section class='applet_container'><div class='applet' data-payload='{"command": "load_file assets/lfw_commercial_use.csv", "fields": ["name_display, company_url, example_url, country, description"]}'></div></section><section><h3>Research</h3>
-<ul>
-<li>"In our experiments, we used 10000 images and associated captions from the Faces in the wilddata set [3]."</li>
-<li>"This work was supported in part by the Center for Intelligent Information Retrieval, the Central Intelligence Agency, the National Security Agency and National Science Foundation under CAREER award IIS-0546666 and grant IIS-0326249."</li>
-<li>From: "People-LDA: Anchoring Topics to People using Face Recognition" <a href="https://www.semanticscholar.org/paper/People-LDA%3A-Anchoring-Topics-to-People-using-Face-Jain-Learned-Miller/10f17534dba06af1ddab96c4188a9c98a020a459">https://www.semanticscholar.org/paper/People-LDA%3A-Anchoring-Topics-to-People-using-Face-Jain-Learned-Miller/10f17534dba06af1ddab96c4188a9c98a020a459</a> and <a href="https://ieeexplore.ieee.org/document/4409055">https://ieeexplore.ieee.org/document/4409055</a></li>
-<li>This paper was presented at IEEE 11th ICCV conference Oct 14-21 and the main LFW paper "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments" was also published that same year</li>
-<li>10f17534dba06af1ddab96c4188a9c98a020a459</li>
-<li>This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract number 2014-14071600010.</li>
-<li>From "Labeled Faces in the Wild: Updates and New Reporting Procedures"</li>
-<li>70% of people in the dataset have only 1 image and 29% have 2 or more images</li>
-<li>The LFW dataset is considered the "most popular benchmark for face recognition" <a class="footnote_shim" name="[^lfw_baidu]_1"> </a><a href="#[^lfw_baidu]" class="footnote" title="Footnote 2">2</a></li>
-<li>The LFW dataset is "the most widely used evaluation set in the field of facial recognition" <a class="footnote_shim" name="[^lfw_pingan]_2"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a></li>
-<li>All images in LFW dataset were obtained "in the wild" meaning without any consent from the subject or from the photographer</li>
-<li>The faces in the LFW dataset were detected using the Viola-Jones haarcascade face detector [^lfw_website] [^lfw-survey]</li>
-<li>The LFW dataset is used by several of the largest tech companies in the world including "Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong." <a class="footnote_shim" name="[^lfw_pingan]_3"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a></li>
-<li>All images in the LFW dataset were copied from Yahoo News between 2002 - 2004</li>
-<li>In 2014, two of the four original authors of the LFW dataset received funding from IARPA and ODNI for their followup paper <a href="https://www.semanticscholar.org/paper/Labeled-Faces-in-the-Wild-%3A-Updates-and-New-Huang-Learned-Miller/2d3482dcff69c7417c7b933f22de606a0e8e42d4">Labeled Faces in the Wild: Updates and New Reporting Procedures</a> via IARPA contract number 2014-14071600010</li>
-<li>The dataset includes 2 images of <a href="http://vis-www.cs.umass.edu/lfw/person/George_Tenet.html">George Tenet</a>, the former Director of Central Intelligence (DCI) for the Central Intelligence Agency whose facial biometrics were eventually used to help train facial recognition software in China and Russia</li>
-<li>./15/155205b8e288fd49bf203135871d66de879c8c04/paper.txt shows usage by DSTO Australia, supported parimal@iisc.ac.in</li>
-</ul>
-</section><section><div class='meta'><div><div class='gray'>Created</div><div>2002 &ndash; 2004</div></div><div><div class='gray'>Images</div><div>13,233</div></div><div><div class='gray'>Identities</div><div>5,749</div></div><div><div class='gray'>Origin</div><div>Yahoo! News Images</div></div><div><div class='gray'>Used by</div><div>Facebook, Google, Microsoft, Baidu, Tencent, SenseTime, Face++, CIA, NSA, IARPA</div></div><div><div class='gray'>Website</div><div><a href="http://vis-www.cs.umass.edu/lfw">umass.edu</a></div></div></div><section><section><ul>
-<li>There are about 3 men for every 1 woman in the LFW dataset<a class="footnote_shim" name="[^lfw_www]_2"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a></li>
-<li>The person with the most images is <a href="http://vis-www.cs.umass.edu/lfw/person/George_W_Bush_comp.html">George W. Bush</a> with 530</li>
-<li>There are about 3 George W. Bush's for every 1 <a href="http://vis-www.cs.umass.edu/lfw/person/Tony_Blair.html">Tony Blair</a></li>
-<li>The LFW dataset includes over 500 actors, 30 models, 10 presidents, 124 basketball players, 24 football players, 11 kings, 7 queens, and 1 <a href="http://vis-www.cs.umass.edu/lfw/person/Moby.html">Moby</a></li>
-<li>In all 3 of the LFW publications [^lfw_original_paper], [^lfw_survey], [^lfw_tech_report] the words "ethics", "consent", and "privacy" appear 0 times</li>
-<li>The word "future" appears 71 times</li>
-<li>* denotes partial funding for related research</li>
-</ul>
-</section><section><h3>References</h3><section><ul class="footnotes"><li>1 <a name="[^lfw_www]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_www]_1">a</a><a href="#[^lfw_www]_2">b</a></span><a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a>
-</li><li>2 <a name="[^lfw_baidu]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_baidu]_1">a</a></span>Jingtuo Liu, Yafeng Deng, Tao Bai, Zhengping Wei, Chang Huang. Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. <a href="https://arxiv.org/abs/1506.07310">https://arxiv.org/abs/1506.07310</a>
-</li><li>3 <a name="[^lfw_pingan]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_pingan]_1">a</a><a href="#[^lfw_pingan]_2">b</a><a href="#[^lfw_pingan]_3">c</a></span>Lee, Justin. "PING AN Tech facial recognition receives high score in latest LFW test results". BiometricUpdate.com. Feb 13, 2017. <a href="https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results">https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results</a>
-</li></ul></section></section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
deleted file mode 100644
index 0415f969..00000000
--- a/site/public/datasets/market_1501/index.html
+++ /dev/null
@@ -1,133 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="Market-1501 is a dataset of CCTV footage collected at Tsinghua University" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>Market 1501</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Market-1501</span> is a dataset of CCTV footage collected at Tsinghua University</span></div><div class='hero_subdesc'><span class='bgpad'>The Market-1501 dataset includes 1,501 identities captured by 6 surveillance cameras located on campus
-</span></div></div></section><section><h2>Market-1501 Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>32,668 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>1,501 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Person re-identification</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://www.liangzheng.org/Project/project_reid.html' target='_blank' rel='nofollow noopener'>liangzheng.org</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT]</p>
-</section><section>
- <h3>Who used Market 1501?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how Market 1501 has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Market 1501 Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h2>(ignore) research Notes</h2>
-<ul>
-<li>"MARS is an extension of the Market-1501 dataset. During collection, we placed six near synchronized cameras in the campus of Tsinghua university. There were Five 1,080<em>1920 HD cameras and one 640</em>480 SD camera. MARS consists of 1,261 different pedestrians whom are captured by at least 2 cameras. Given a query tracklet, MARS aims to retrieve tracklets that contain the same ID." - main paper</li>
-<li>bbox "0065C1T0002F0016.jpg", "0065" is the ID of the pedestrian. "C1" denotes the first
-camera (there are totally 6 cameras). "T0002" means the 2th tracklet. "F016" is the 16th frame
-within this tracklet. For the tracklets, their names are accumulated for each ID; but for frames,
-they start from "F001" in each tracklet.</li>
-</ul>
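-<p>A minimal parsing sketch for the bbox naming scheme described above; the helper function name and regular expression are illustrative and not part of the dataset's own tooling:</p>
-<pre><code>import re
-
-def parse_mars_filename(name):
-    """Split a MARS bbox filename into pedestrian ID, camera, tracklet, and frame."""
-    m = re.match(r"(\d{4})C(\d)T(\d{4})F(\d{3,4})\.jpg$", name)
-    if m is None:
-        raise ValueError(f"unexpected filename: {name}")
-    pid, cam, tracklet, frame = m.groups()
-    return {"id": pid, "camera": int(cam), "tracklet": int(tracklet), "frame": int(frame)}
-
-print(parse_mars_filename("0065C1T0002F0016.jpg"))
-# {'id': '0065', 'camera': 1, 'tracklet': 2, 'frame': 16}
-</code></pre>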
-<pre><code>@proceedings{zheng2016mars,
-  title={MARS: A Video Benchmark for Large-Scale Person Re-identification},
-  author={Zheng, Liang and Bie, Zhi and Sun, Yifan and Wang, Jingdong and Su, Chi and Wang, Shengjin and Tian, Qi},
-  booktitle={European Conference on Computer Vision},
-  year={2016},
-  organization={Springer}
-}</code></pre>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/msceleb/assets/notes/index.html b/site/public/datasets/msceleb/assets/notes/index.html
index e9751129..469a7bca 100644
--- a/site/public/datasets/msceleb/assets/notes/index.html
+++ b/site/public/datasets/msceleb/assets/notes/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-">
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index 416f19c8..a8172dff 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -239,7 +243,7 @@
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
     To help understand how Microsoft Celeb has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Microsoft Celebrity Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index d681741a..0bf0e1be 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -100,7 +104,7 @@
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
     To help understand how TownCentre has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Oxford Town Centre was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
deleted file mode 100644
index 065f3e47..00000000
--- a/site/public/datasets/pipa/index.html
+++ /dev/null
@@ -1,121 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content=" People in Photo Albums (PIPA) is a dataset..." />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>PIPA Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">People in Photo Albums (PIPA)</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescription ]
-</span></div></div></section><section><h2>People in Photo Albums</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>37,107 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>2,356 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Face recognition</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- <div>12 GB</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://people.eecs.berkeley.edu/~nzhang/piper.html' target='_blank' rel='nofollow noopener'>berkeley.edu</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used PIPA Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how PIPA Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing People in Photo Albums Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html
deleted file mode 100644
index 79644e40..00000000
--- a/site/public/datasets/pubfig/index.html
+++ /dev/null
@@ -1,118 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="PubFig is a dataset..." />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>PubFig</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pubfig/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">PubFig</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescription ]
-</span></div></div></section><section><h2>PubFig</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2009</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>58,797 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>200 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
-    <div>Mostly names from LFW but with new names added; large variation in pose, lighting, expression, scene, camera, and imaging conditions and parameters</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://www.cs.columbia.edu/CAVE/databases/pubfig/' target='_blank' rel='nofollow noopener'>columbia.edu</a></div>
- </div></div><p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used PubFig?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how PubFig has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Public Figures Face Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/uccs/assets/notes/index.html b/site/public/datasets/uccs/assets/notes/index.html
index 8746ed70..e611476b 100644
--- a/site/public/datasets/uccs/assets/notes/index.html
+++ b/site/public/datasets/uccs/assets/notes/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-">
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index c9ef82e7..24544205 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -49,7 +49,11 @@
<div class='links'>
<a href="/datasets/">Datasets</a>
<a href="/about/">About</a>
+<<<<<<< HEAD
<a href="/about/news">News</a>
+=======
+ <a href="/about/updates/">Updates</a>
+>>>>>>> 76c058b87f94fb1ed7b37869a8082c25c7ab37de
</div>
</header>
<div class="content content-dataset">
@@ -106,7 +110,7 @@ Their setup made it impossible for students to know they were being photographed
<section>
- <h3>Informaton Supply chain</h3>
+ <h3>Information Supply chain</h3>
<p>
     To help understand how UCCS has been used around the world by commercial, military, and academic organizations, existing publicly available research citing UnConstrained College Students Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
deleted file mode 100644
index 7844f5f4..00000000
--- a/site/public/datasets/vgg_face2/index.html
+++ /dev/null
@@ -1,143 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="VGG Face 2 Dataset" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
-    <div class='page_name'>VGG Face 2</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>VGG Face 2</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>11,917 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Head detection</div>
- </div><div class='meta'>
- <div class='gray'>Created by</div>
- <div>Stanford University (US), Max Planck Institute for Informatics (DE)</div>
- </div><div class='meta'>
- <div class='gray'>Funded by</div>
- <div>Max Planck Center for Visual Computing and Communication</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- <div>4.1 GB</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used VGG Face 2?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how VGG Face 2 has been used around the world by commercial, military, and academic organizations, existing publicly available research citing VGG Face 2 Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h3>(ignore) research notes</h3>
-<ul>
-<li>The VGG Face 2 dataset includes approximately 1,331 actresses, 139 presidents, 16 wives, 3 husbands, 2 snooker players, and 1 guru</li>
-<li>The original VGGF2 name list has been updated with the results returned from the Google Knowledge Graph</li>
-<li>Names with a similarity score greater than 0.75 were automatically updated. Scores were computed using <code>import difflib; seq = difflib.SequenceMatcher(a=a.lower(), b=b.lower()); score = seq.ratio()</code> (see the sketch after this list)</li>
-<li>The 97 names with a score of 0.75 or lower were manually reviewed; this includes name changes validated using Wikipedia.org results for names such as "Bruce Jenner" to "Caitlyn Jenner", spousal last-name changes, and discretionary changes to improve search results such as combining nicknames with the full name when appropriate, for example changing "Aleksandar Petrović" to "Aleksandar 'Aco' Petrović", and minor changes such as "Mohammad Ali" to "Muhammad Ali"</li>
-<li>The 'Description' text was automatically added when the Knowledge Graph score was greater than 250</li>
-</ul>
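-<p>For reference, a minimal sketch of the name-matching step described in the notes above; the 0.75 threshold and the <code>difflib</code> call are taken from these notes, while the helper function name and the example output comments are illustrative:</p>
-<pre><code>import difflib
-
-def name_similarity(a, b):
-    """Case-insensitive sequence similarity between two name strings (0.0 to 1.0)."""
-    seq = difflib.SequenceMatcher(a=a.lower(), b=b.lower())
-    return seq.ratio()
-
-# Names scoring above 0.75 were updated automatically; lower scores went to manual review.
-print(name_similarity("Mohammad Ali", "Muhammad Ali"))    # ~0.92, updated automatically
-print(name_similarity("Bruce Jenner", "Caitlyn Jenner"))  # ~0.62, flagged for manual review
-</code></pre>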
-<h2>TODO</h2>
-<ul>
-<li>create name list, and populate with Knowledge graph information like LFW</li>
-<li>make list of interesting number stats, by the numbers</li>
-<li>make list of interesting important facts</li>
-<li>write intro abstract</li>
-<li>write analysis of usage</li>
-<li>find examples, citations, and screenshots of usage</li>
-<li>find list of companies using it for table</li>
-<li>create montages of the dataset, like LFW</li>
-<li>create right to removal information</li>
-</ul>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html
deleted file mode 100644
index 320899ea..00000000
--- a/site/public/datasets/viper/index.html
+++ /dev/null
@@ -1,123 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="VIPeR is a person re-identification dataset of images captured at UC Santa Cruz in 2007" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
- <link rel='stylesheet' href='/assets/css/mobile .css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>VIPeR</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">VIPeR</span> is a person re-identification dataset of images captured at UC Santa Cruz in 2007</span></div><div class='hero_subdesc'><span class='bgpad'>VIPeR contains 1,264 images of 632 people captured on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance
-</span></div></div></section><section><h2>VIPeR Dataset</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2007</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>1,264 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>632 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Person re-identification</div>
- </div><div class='meta'>
- <div class='gray'>Created by</div>
- <div>University of California Santa Cruz</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://vision.soe.ucsc.edu/node/178' target='_blank' rel='nofollow noopener'>ucsc.edu</a></div>
- </div></div><p>[ page under development ]</p>
-<p><em>VIPeR (Viewpoint Invariant Pedestrian Recognition)</em> is a dataset of pedestrian images captured at the University of California Santa Cruz in 2007. According to the researchers, two "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."</p>
-<p>VIPeR is amongst the most widely used publicly available person re-identification datasets. In 2017 the VIPeR dataset was combined into a larger person re-identification dataset created by the Chinese University of Hong Kong called PETA (PEdesTrian Attribute).</p>
-</section><section>
- <h3>Who used VIPeR?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-    To help understand how VIPeR has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Viewpoint Invariant Pedestrian Recognition was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>; dataset usage is then verified and geolocated.</div>
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section>
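The collection and geocoding pipeline itself is not part of this diff. As a loose illustration of the two steps described above, gathering citing papers from Semantic Scholar and geocoding institution names, here is a minimal TypeScript sketch; the Semantic Scholar Graph API endpoint and the use of OpenStreetMap's Nominatim geocoder are assumptions for illustration rather than the project's actual tooling, and it does not replace the manual verification step.

```ts
// Sketch only: fetch papers citing a given Semantic Scholar paper ID, then
// geocode an institution string. Endpoints reflect the public APIs as
// documented; this is not the pipeline MegaPixels actually used.

interface CitationEdge {
  citingPaper: { title: string };
}

async function fetchCitingTitles(paperId: string): Promise<string[]> {
  // Semantic Scholar Graph API: GET /graph/v1/paper/{id}/citations?fields=title
  const url = `https://api.semanticscholar.org/graph/v1/paper/${paperId}/citations?fields=title&limit=100`;
  const res = await fetch(url);
  const body = (await res.json()) as { data: CitationEdge[] };
  return body.data.map((edge) => edge.citingPaper.title);
}

async function geocodeInstitution(
  name: string,
): Promise<{ lat: number; lon: number } | null> {
  // Nominatim (OpenStreetMap) free-text search; lat/lon are returned as strings.
  const url = `https://nominatim.openstreetmap.org/search?q=${encodeURIComponent(name)}&format=json&limit=1`;
  const res = await fetch(url);
  const hits = (await res.json()) as { lat: string; lon: string }[];
  if (hits.length === 0) return null;
  return { lat: parseFloat(hits[0].lat), lon: parseFloat(hits[0].lon) };
}

// Usage (paper ID placeholder is hypothetical):
// fetchCitingTitles("<semantic-scholar-paper-id>").then(console.log);
// geocodeInstitution("University of California Santa Cruz").then(console.log);
```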
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file
diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html
deleted file mode 100644
index b871ab18..00000000
--- a/site/public/datasets/youtube_celebrities/index.html
+++ /dev/null
@@ -1,114 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="YouTube Celebrities" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-	<link rel='stylesheet' href='/assets/css/mobile.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='page_name'>YouTube Celebrities</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><h2>YouTube Celebrities</h2>
-</section><section><div class='right-sidebar'></div><p>[ page under development ]</p>
-</section><section>
- <h3>Who used YouTube Celebrities?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
-	To help understand how the YouTube Celebrities dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing YouTube Celebrities was collected, verified, and geocoded to show the biometric trade routes of the people appearing in it. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. If you use our data, please <a href="/about/attribution">cite our work</a>.
- </p>
-
- <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
-</section><section><h4>Notes...</h4>
-<ul>
-<li>Selected dataset sequences: (a) MBGC, (b) CMU MoBo, (c) First
-Honda/UCSD, and (d) YouTube Celebrities.</li>
-<li>This research is supported by the Central Intelligence Agency, the Biometrics
-Task Force and the Technical Support Working Group through US Army contract
-W91CRB-08-C-0093. The opinions, findings, and conclusions or recommendations
-expressed in this publication are those of the authors and do not necessarily reflect
-the views of our sponsors.</li>
-<li>in "Face Recognition From Video Draft 17"</li>
-<li>International Journal of Pattern Recognition and Artificial Intelligence, World Scientific Publishing Company</li>
-</ul>
-</section>
-
- </div>
- <footer>
- <ul class="footer-left">
- <li><a href="/">MegaPixels.cc</a></li>
- <li><a href="/datasets/">Datasets</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="/about/press/">Press</a></li>
- <li><a href="/about/legal/">Legal and Privacy</a></li>
- </ul>
- <ul class="footer-right">
- <li>MegaPixels &copy;2017-19 &nbsp;<a href="https://ahprojects.com">Adam R. Harvey</a></li>
- <li>Made with support from &nbsp;<a href="https://mozilla.org">Mozilla</a></li>
- </ul>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html> \ No newline at end of file