Diffstat (limited to 'site/public/datasets')
-rw-r--r-- site/public/datasets/50_people_one_question/index.html | 114
-rw-r--r-- site/public/datasets/afad/index.html | 127
-rw-r--r-- site/public/datasets/aflw/index.html | 53
-rw-r--r-- site/public/datasets/brainwash/index.html | 146
-rw-r--r-- site/public/datasets/caltech_10k/index.html | 124
-rw-r--r-- site/public/datasets/celeba/index.html | 126
-rw-r--r-- site/public/datasets/cofw/index.html | 179
-rw-r--r-- site/public/datasets/facebook/index.html | 54
-rw-r--r-- site/public/datasets/feret/index.html | 87
-rw-r--r-- site/public/datasets/lfpw/index.html | 116
-rw-r--r-- site/public/datasets/market_1501/index.html | 132
-rw-r--r-- site/public/datasets/pipa/index.html | 120
-rw-r--r-- site/public/datasets/pubfig/index.html | 117
-rw-r--r-- site/public/datasets/uccs/index.html | 255
-rw-r--r-- site/public/datasets/vgg_face2/index.html | 142
-rw-r--r-- site/public/datasets/viper/index.html | 122
-rw-r--r-- site/public/datasets/youtube_celebrities/index.html | 113
17 files changed, 0 insertions, 2127 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html deleted file mode 100644 index 3b33f530..00000000 --- a/site/public/datasets/50_people_one_question/index.html +++ /dev/null @@ -1,114 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="People One Question is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>50 People One Question Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/50_people_one_question/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">People One Question</span> is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms</span></div><div class='hero_subdesc'><span class='bgpad'>People One Question dataset includes ... 
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2013</div> - </div><div class='meta'> - <div class='gray'>Videos</div> - <div>33 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>Facial landmark estimation</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='http://www.vision.caltech.edu/~dhall/projects/MergingPoseEstimates/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div> - </div></div><h2>50 People 1 Question</h2> -<p>[ page under development ]</p> -</section><section> - <h3>Who used 50 People One Question Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how 50 People One Question Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing 50 People One Question was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. 
- </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html deleted file mode 100644 index 67a4e981..00000000 --- a/site/public/datasets/afad/index.html +++ /dev/null @@ -1,127 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="AFAD: Asian Face Age Dataset" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>Asian Face Age Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2017</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>164,432 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>age estimation on Asian Faces</div> - </div><div class='meta'> - <div class='gray'>Funded by</div> - <div>NSFC, the Fundamental Research Funds for the Central Universities, the Program for Changjiang Scholars and Innovative Research Team in University of China, the Shaanxi Innovative Research Team for Key Science and Technology, and China Postdoctoral Science Foundation</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://afad-dataset.github.io/' target='_blank' rel='nofollow noopener'>github.io</a></div> - </div></div><h2>Asian Face Age 
Dataset</h2> -<p>[ page under development ]</p> -</section><section> - <h3>Who used Asian Face Age Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how Asian Face Age Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Asian Face Age Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h2>(ignore) research notes</h2> -<blockquote><p>The Asian Face Age Dataset (AFAD) is a new dataset proposed for evaluating the performance of age estimation, which contains more than 160K facial images and the corresponding age and gender labels. This dataset is oriented to age estimation on Asian faces, so all the facial images are for Asian faces. It is noted that the AFAD is the biggest dataset for age estimation to date. It is well suited to evaluate how deep learning methods can be adopted for age estimation. -Motivation</p> -<p>For age estimation, there are several public datasets for evaluating the performance of a specific algorithm, such as FG-NET [1] (1002 face images), MORPH I (1690 face images), and MORPH II[2] (55,608 face images). Among them, the MORPH II is the biggest public dataset to date. On the other hand, as we know it is necessary to collect a large scale dataset to train a deep Convolutional Neural Network. Therefore, the MORPH II dataset is extensively used to evaluate how deep learning methods can be adopted for age estimation [3][4].</p> -<p>However, the ethnic is very unbalanced for the MORPH II dataset, i.e., it has only less than 1% Asian faces. In order to evaluate the previous methods for age estimation on Asian Faces, the Asian Face Age Dataset (AFAD) was proposed.</p> -<p>There are 164,432 well-labeled photos in the AFAD dataset. It consist of 63,680 photos for female as well as 100,752 photos for male, and the ages range from 15 to 40. The distribution of photo counts for distinct ages are illustrated in the figure above. Some samples are shown in the Figure on the top. 
Its download link is provided in the "Download" section.</p> -<p>In addition, we also provide a subset of the AFAD dataset, called AFAD-Lite, which only contains PLACEHOLDER well-labeled photos. It consist of PLACEHOLDER photos for female as well as PLACEHOLDER photos for male, and the ages range from 15 to 40. The distribution of photo counts for distinct ages are illustrated in Fig. PLACEHOLDER. Its download link is also provided in the "Download" section.</p> -<p>The AFAD dataset is built by collecting selfie photos on a particular social network -- RenRen Social Network (RSN) [5]. The RSN is widely used by Asian students including middle school, high school, undergraduate, and graduate students. Even after leaving from school, some people still access their RSN account to connect with their old classmates. So, the age of the RSN user crosses a wide range from 15-years to more than 40-years old.</p> -<p>Please notice that this dataset is made available for academic research purpose only.</p> -</blockquote> -<p><a href="https://afad-dataset.github.io/">https://afad-dataset.github.io/</a></p> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/aflw/index.html b/site/public/datasets/aflw/index.html deleted file mode 100644 index 81fb7335..00000000 --- a/site/public/datasets/aflw/index.html +++ /dev/null @@ -1,53 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="AFLW: Annotated Facial Landmarks in The Wild" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><h1>Annotated Facial Landmarks in The Wild</h1> -</section><section><div class='meta'><div><div class='gray'>Years</div><div>1993-1996</div></div><div><div class='gray'>Images</div><div>25,993</div></div><div><div class='gray'>Identities</div><div>1,199 </div></div><div><div class='gray'>Origin</div><div>Flickr</div></div></div><section><section><!--header--></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/aflw/aflw_index.gif' alt=''></div></section><section><p>RESEARCH below this line</p> -<blockquote><p>The motivation for the AFLW database is the need for a large-scale, multi-view, real-world face database with annotated facial features. We gathered the images on Flickr using a wide range of face relevant tags (e.g., face, mugshot, profile face). 
The downloaded set of images was manually scanned for images containing faces. The key data and most important properties of the database are:</p> -</blockquote> -<p><a href="https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/">https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/</a></p> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html deleted file mode 100644 index bd59f573..00000000 --- a/site/public/datasets/brainwash/index.html +++ /dev/null @@ -1,146 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>Brainwash Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms -</span></div></div></section><section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2015</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>11,917 </div> - </div><div class='meta'> 
- <div class='gray'>Purpose</div> - <div>Head detection</div> - </div><div class='meta'> - <div class='gray'>Created by</div> - <div>Stanford University (US), Max Planck Institute for Informatics (DE)</div> - </div><div class='meta'> - <div class='gray'>Funded by</div> - <div>Max Planck Center for Visual Computing and Communication</div> - </div><div class='meta'> - <div class='gray'>Download Size</div> - <div>4.1 GB</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div> - </div></div><h2>Brainwash Dataset</h2> -<p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset was captured during three days in 2014: October 27, November 13, and November 24. According to the authors' research paper introducing the dataset, the images were acquired with the help of Angelcam.com.<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p> -<p>Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015 it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and in 2017 researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance. 
<a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a></p> -<p>If you happen to have been at Brainwash cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset and have unwittingly contributed to surveillance research.</p> -</section><section> - <h3>Who used Brainwash Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how Brainwash Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. 
- </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section> - - <div class="hr-wave-holder"> - <div class="hr-wave-line hr-wave-line1"></div> - <div class="hr-wave-line hr-wave-line2"></div> - </div> - - <h2>Supplementary Information</h2> - -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_saliency_map.jpg' alt=' A visualization of 81,973 head annotations from the Brainwash dataset training partition. © megapixels.cc'><div class='caption'> A visualization of 81,973 head annotations from the Brainwash dataset training partition. 
© megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/00425000_960.jpg' alt=' A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_montage.jpg' alt=' 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><p>TODO</p> -<ul> -<li>change supp images to 2x2 grid with bboxes</li> -<li>add bounding boxes to the header image</li> -<li>remake montage with randomized images, with bboxes</li> -<li>add ethics link to Stanford</li> -<li>add optout info</li> -</ul> -</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p> -</li><li><a name="[^end_to_end]" class="footnote_shim"></a><span class="backlinks"><a href="#[^end_to_end]_1">a</a></span><p>Stewart, Russell. Andriluka, Mykhaylo. "End-to-end people detection in crowded scenes". 
2016.</p> -</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"><a href="#[^localized_region_context]_1">a</a></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p> -</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^replacement_algorithm]_1">a</a></span><p>Zhao, X., Wang, Y., Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Based on Graph Clustering.</p> -</li></ul></section></section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html deleted file mode 100644 index 10925b09..00000000 --- a/site/public/datasets/caltech_10k/index.html +++ /dev/null @@ -1,124 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Caltech 10K Faces Dataset" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>Caltech 10K Faces Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2015</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>11,917 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>Head detection</div> - </div><div class='meta'> - <div class='gray'>Created by</div> - <div>Stanford University (US), Max Planck Institute for Informatics (DE)</div> - </div><div class='meta'> - <div class='gray'>Funded by</div> - <div>Max Planck Center for Visual Computing and Communication</div> - </div><div class='meta'> - <div class='gray'>Download Size</div> - <div>4.1 GB</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div> - 
</div></div><h2>Caltech 10K Faces Dataset</h2> -<p>[ page under development ]</p> -</section><section> - <h3>Who used Caltech 10K Faces Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how Caltech 10K Faces Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Caltech 10K Faces Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div> -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h3>(ignore) research notes</h3> -<p>The dataset contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low resolution faces are not labeled.</p> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html deleted file mode 100644 index 3b9883dc..00000000 --- a/site/public/datasets/celeba/index.html +++ /dev/null @@ -1,126 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="CelebA is a dataset of people..." /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>CelebA Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/celeba/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">CelebA</span> is a dataset of people...</span></div><div class='hero_subdesc'><span class='bgpad'>CelebA includes... 
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>202,599 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>10,177 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face attribute recognition, face detection, and landmark (or facial part) localization</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- <div>1.4 GB</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html' target='_blank' rel='nofollow noopener'>edu.hk</a></div>
- </div></div><h2>CelebA Dataset</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used CelebA Dataset?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{"command": "chart"}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{"command": "piechart"}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how CelebA Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Large-scale CelebFaces Attributes Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. 
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{"command": "map"}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
- </p>
-
- <div class="applet" data-payload="{"command": "citations"}"></div>
-</section><section><h3>Research</h3>
-<ul>
-<li>"An Unsupervised Approach to Solving Inverse Problems using Generative Adversarial Networks" notes that the work was "sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their..."</li>
-<li>7dab6fbf42f82f0f5730fc902f72c3fb628ef2f0</li>
-<li>The NNSA's (National Nuclear Security Administration) principal responsibility is ensuring the safety, security and reliability of the nation's nuclear weapons</li>
-</ul>
-</section>
-
- </div>
- <footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels ©2017-19 Adam R. 
Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html deleted file mode 100644 index f335442c..00000000 --- a/site/public/datasets/cofw/index.html +++ /dev/null @@ -1,179 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="COFW: Caltech Occluded Faces in The Wild" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>COFW Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2013</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>1,007 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>challenging dataset (sunglasses, hats, interaction with objects)</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='http://www.vision.caltech.edu/xpburgos/ICCV13/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div> - </div></div><h2>Caltech Occluded Faces in the Wild</h2> -<p>[ PAGE UNDER DEVELOPMENT ]</p> -</section><section> - <h3>Who used COFW Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. 
These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{"command": "chart"}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{"command": "piechart"}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how COFW Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Caltech Occluded Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{"command": "map"}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
- </p>
-
- <div class="applet" data-payload="{"command": "citations"}"></div>
-</section><section><h3>(ignore) research notes</h3>
-</section><section><div class='meta'><div><div class='gray'>Years</div><div>1993-1996</div></div><div><div class='gray'>Images</div><div>14,126</div></div><div><div class='gray'>Identities</div><div>1,199 </div></div><div><div class='gray'>Origin</div><div>Web Searches</div></div><div><div class='gray'>Funded by</div><div>ODNI, IARPA, Microsoft</div></div></div><section><section><p>COFW "is designed to benchmark face landmark algorithms in realistic conditions, which include heavy occlusions and large shape variations" [Robust face landmark estimation under occlusion].</p>
-<blockquote><p>We asked four people with different levels of computer vision knowledge to each collect 250 faces representative of typical real-world images, with the clear goal of challenging computer vision methods.
-The result is 1,007 images of faces obtained from a variety of sources.</p>
-</blockquote>
-<p>Robust face landmark estimation under occlusion</p>
-<blockquote><p>Our face dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones, etc.). All images were hand annotated in our lab using the same 29 landmarks as in LFPW. We annotated both the landmark positions as well as their occluded/unoccluded state. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23%.
-To increase the number of training images, and since COFW has the exact same landmarks as LFPW, for training we use the original non-augmented 845 LFPW faces + 500 COFW faces (1345 total), and for testing the remaining 507 COFW faces. 
To make sure all images had occlusion labels, we annotated occlusion on the available 845 LFPW training images, finding an average of only 2% occlusion.</p>
-</blockquote>
-<p><a href="http://www.vision.caltech.edu/xpburgos/ICCV13/">http://www.vision.caltech.edu/xpburgos/ICCV13/</a></p>
-<blockquote><p>This research is supported by NSF Grant 0954083 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012.</p>
-</blockquote>
-<p><a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a></p>
-</section><section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how COFW Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Caltech Occluded Faces in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the location markers to reveal research projects at that location. 
- </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div > -</div><section> - - <div class="hr-wave-holder"> - <div class="hr-wave-line hr-wave-line1"></div> - <div class="hr-wave-line hr-wave-line2"></div> - </div> - - <h2>Supplementary Information</h2> - -</section><section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section> - <h3>Who used COFW Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
- </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section><p>TODO</p> -<h2>- replace graphic</h2> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/facebook/index.html b/site/public/datasets/facebook/index.html deleted file mode 100644 index be413510..00000000 --- a/site/public/datasets/facebook/index.html +++ /dev/null @@ -1,54 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="TBD" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/facebook/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>TBD</span></div><div class='hero_subdesc'><span class='bgpad'>TBD -</span></div></div></section><section><div class='image'><div class='intro-caption caption'>TBD</div></div></section><section><h3>Statistics</h3> -<div class='meta'><div><div class='gray'>Years</div><div>2002-2004</div></div><div><div class='gray'>Images</div><div>13,233</div></div><div><div class='gray'>Identities</div><div>5,749</div></div><div><div class='gray'>Origin</div><div>Yahoo News Images</div></div><div><div class='gray'>Funding</div><div>(Possibly, partially CIA)</div></div></div><p>Ignore content below these lines</p> -<ul> -<li>Tool to create face datasets from Facebook <a 
href="https://github.com/ankitaggarwal011/FaceGrab">https://github.com/ankitaggarwal011/FaceGrab</a></li> -</ul> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
deleted file mode 100644
index 5cd29c4c..00000000
--- a/site/public/datasets/feret/index.html
+++ /dev/null
@@ -1,87 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="FERET: Face Recognition Technology" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='splash'>FERET</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><div class='left-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>1996</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>14,126 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>1,199 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>face recognition</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='https://www.nist.gov/programs-projects/face-recognition-technology-feret' target='_blank' rel='nofollow noopener'>nist.gov</a></div>
- </div><h1>FacE REcognition Technology (FERET)</h1>
-<p>[ page under development ]</p>
-<p>{% include 'dashboard.html' %}</p>
-<h3>(ignore) RESEARCH below this line</h3>
-<ul>
-<li>Years: 1993-1996</li>
-<li>Images: 14,126</li>
-<li>Identities: 1,199 </li>
-<li>Origin: Fairfax, MD</li>
-<li><em>Face Recognition Technology</em> 
(FERET) is a program to develop, test, and evaluate face recognition algorithms</li>
-<li>The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties.</li>
-<li><a href="https://www.nist.gov/programs-projects/face-recognition-technology-feret">https://www.nist.gov/programs-projects/face-recognition-technology-feret</a></li>
-</ul>
-<h3>"The FERET database and evaluation procedure for face-recognition algorithms"</h3>
-<ul>
-<li>Images were captured using Kodak Ultra film</li>
-<li>The facial images were collected in 11 sessions from August 1993 to December 1994, conducted at George Mason University and at US Army Research Laboratory facilities.</li>
-</ul>
-<h3>FERET (Face Recognition Technology) Recognition Algorithm Development and Test Results</h3>
-<ul>
-<li>"A release form is necessary because of the privacy laws in the United States."</li>
-</ul>
-</div><h2>Funding</h2>
-<p>The FERET program is sponsored by the U.S. Department of Defense’s Counterdrug Technology Development Program Office. The U.S. Army Research Laboratory (ARL) is the technical agent for the FERET program. ARL designed, administered, and scored the FERET tests. George Mason University collected, processed, and maintained the FERET database. Inquiries regarding the FERET database or test should be directed to P. Jonathon Phillips.</p>
-</section>
-
- </div>
- <footer>
- <div>
- <a href="/">MegaPixels.cc</a>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- <a href="/about/press/">Press</a>
- <a href="/about/legal/">Legal and Privacy</a>
- </div>
- <div>
- MegaPixels ©2017-19 Adam R. Harvey /
- <a href="https://ahprojects.com">ahprojects.com</a>
- </div>
- </footer>
-</body>
-
-<script src="/assets/js/dist/index.js"></script>
-</html>
\ No newline at end of file
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
deleted file mode 100644
index 005b7aaa..00000000
--- a/site/public/datasets/lfpw/index.html
+++ /dev/null
@@ -1,116 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="LFPW: Labeled Face Parts in The Wild" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='splash'>LFPW</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-">
-
- <section><div class='left-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2011</div>
- </div><div class='meta'>
- <div class='gray'>Funded by</div>
- <div>CIA</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://neerajkumar.org/databases/lfpw/' target='_blank' rel='nofollow noopener'>neerajkumar.org</a></div>
- </div></div><h2>Labeled Face Parts in The Wild</h2>
-</section><section>
- <h3>Who used LFPW?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{"command": "chart"}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{"command": "piechart"}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how LFPW has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Labeled Face Parts in the Wild was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{"command": "map"}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
- </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><p>RESEARCH below this line</p> -<blockquote><p>Release 1 of LFPW consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in dataset. LFPW was originally described in the following publication:</p> -<p>Due to copyright issues, we cannot distribute image files in any format to anyone. Instead, we have made available a list of image URLs where you can download the images yourself. We realize that this makes it impossible to exactly compare numbers, as image links will slowly disappear over time, but we have no other option. This seems to be the way other large web-based databases seem to be evolving.</p> -</blockquote> -<p><a href="https://neerajkumar.org/databases/lfpw/">https://neerajkumar.org/databases/lfpw/</a></p> -<blockquote><p>This research was performed at Kriegman-Belhumeur Vision Technologies and was funded by the CIA through the Office of the Chief Scientist. <a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a> (nk_cvpr2011_faceparts.pdf)</p> -</blockquote> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
deleted file mode 100644
index 059b1a49..00000000
--- a/site/public/datasets/market_1501/index.html
+++ /dev/null
@@ -1,132 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="Market-1501 is a collection of CCTV footage from Tsinghua University" />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='splash'>Market 1501</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Market-1501</span> is a collection of CCTV footage from Tsinghua University</span></div><div class='hero_subdesc'><span class='bgpad'>The Market-1501 dataset includes 1,501 people recorded by 6 surveillance cameras located on campus
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>32,668 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>1,501 
</div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Person re-identification</div>
- </div><div class='meta'>
- <div class='gray'>Website</div>
- <div><a href='http://www.liangzheng.org/Project/project_reid.html' target='_blank' rel='nofollow noopener'>liangzheng.org</a></div>
- </div></div><h2>Market-1501 Dataset</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
-</section><section>
- <h3>Who used Market 1501?</h3>
-
- <p>
- This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p>
-
- </section>
-
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
- <div class="applet" data-payload="{"command": "chart"}"></div>
-</section>
-
-<section class="applet_container">
- <div class="applet" data-payload="{"command": "piechart"}"></div>
-</section>
-
-<section>
-
- <h3>Biometric Trade Routes</h3>
-
- <p>
- To help understand how Market 1501 has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Market 1501 Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. 
- </p>
-
- </section>
-
-<section class="applet_container fullwidth">
- <div class="applet" data-payload="{"command": "map"}"></div>
-</section>
-
-<div class="caption">
- <ul class="map-legend">
- <li class="edu">Academic</li>
- <li class="com">Commercial</li>
- <li class="gov">Military / Government</li>
- </ul>
- <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> and then dataset usage verified and geolocated.</div >
-</div>
-
-
-<section class="applet_container">
-
- <h3>Dataset Citations</h3>
- <p>
- The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
- </p>
-
- <div class="applet" data-payload="{"command": "citations"}"></div>
-</section><section><h2>(ignore) research notes</h2>
-<ul>
-<li>"MARS is an extension of the Market-1501 dataset. During collection, we placed six near synchronized cameras in the campus of Tsinghua university. There were five 1,080x1920 HD cameras and one 640x480 SD camera. MARS consists of 1,261 different pedestrians whom are captured by at least 2 cameras. Given a query tracklet, MARS aims to retrieve tracklets that contain the same ID." - main paper</li>
-<li>bbox "0065C1T0002F0016.jpg", "0065" is the ID of the pedestrian. "C1" denotes the first
-camera (there are totally 6 cameras). "T0002" means the 2nd tracklet. "F016" is the 16th frame
-within this tracklet. 
For the tracklets, their names are accumulated for each ID; but for frames, -they start from "F001" in each tracklet.</li> -</ul> -<p>@proceedings{zheng2016mars, -title={MARS: A Video Benchmark for Large-Scale Person Re-identification}, -author={Zheng, Liang and Bie, Zhi and Sun, Yifan and Wang, Jingdong and Su, Chi and Wang, Shengjin and Tian, Qi}, -booktitle={European Conference on Computer Vision}, -year={2016}, -organization={Springer} -}</p> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
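The bbox naming convention quoted in the research notes above ("0065C1T0002F0016.jpg": pedestrian ID, camera, tracklet, frame) can be sketched as a small parser. This is an editor's illustration, not code from the site; the fixed digit widths are inferred from the single example filename, and the notes' "F016"/"F001" spellings suggest the widths may vary in practice:

```python
import re

# Pattern for MARS bbox names as described in the notes:
# 4-digit pedestrian ID, camera "C" + digit, tracklet "T" + 4 digits,
# frame "F" + 4 digits (widths assumed from the one example name).
MARS_NAME = re.compile(
    r"^(?P<pid>\d{4})C(?P<cam>\d)T(?P<tracklet>\d{4})F(?P<frame>\d{4})\.jpg$"
)

def parse_mars_name(name):
    """Split a bbox filename into its ID/camera/tracklet/frame fields."""
    m = MARS_NAME.match(name)
    if m is None:
        raise ValueError("not a MARS bbox name: %s" % name)
    return {k: int(v) for k, v in m.groupdict().items()}

info = parse_mars_name("0065C1T0002F0016.jpg")
# pedestrian 65, camera 1, tracklet 2, frame 16 within that tracklet
```

Grouping parsed names by `(pid, cam, tracklet)` would reconstruct the per-tracklet frame sequences the MARS benchmark is queried with.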
\ No newline at end of file
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
deleted file mode 100644
index 7a4fbc0e..00000000
--- a/site/public/datasets/pipa/index.html
+++ /dev/null
@@ -1,120 +0,0 @@
-<!doctype html>
-<html>
-<head>
- <title>MegaPixels</title>
- <meta charset="utf-8" />
- <meta name="author" content="Adam Harvey" />
- <meta name="description" content="People in Photo Albums (PIPA) is a dataset..." />
- <meta name="referrer" content="no-referrer" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
- <link rel='stylesheet' href='/assets/css/fonts.css' />
- <link rel='stylesheet' href='/assets/css/css.css' />
- <link rel='stylesheet' href='/assets/css/leaflet.css' />
- <link rel='stylesheet' href='/assets/css/applets.css' />
-</head>
-<body>
- <header>
- <a class='slogan' href="/">
- <div class='logo'></div>
- <div class='site_name'>MegaPixels</div>
- <div class='splash'>PIPA Dataset</div>
- </a>
- <div class='links'>
- <a href="/datasets/">Datasets</a>
- <a href="/about/">About</a>
- </div>
- </header>
- <div class="content content-dataset">
-
- <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name"> People in Photo Albums (PIPA)</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescription ]
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
- <div class='gray'>Published</div>
- <div>2015</div>
- </div><div class='meta'>
- <div class='gray'>Images</div>
- <div>37,107 </div>
- </div><div class='meta'>
- <div class='gray'>Identities</div>
- <div>2,356 </div>
- </div><div class='meta'>
- <div class='gray'>Purpose</div>
- <div>Face recognition</div>
- </div><div class='meta'>
- <div class='gray'>Download Size</div>
- 
<div>12 GB</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://people.eecs.berkeley.edu/~nzhang/piper.html' target='_blank' rel='nofollow noopener'>berkeley.edu</a></div> - </div></div><h2>People in Photo Albums</h2> -<p>[ PAGE UNDER DEVELOPMENT ]</p> -</section><section> - <h3>Who used PIPA Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how PIPA Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing People in Photo Albums Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html deleted file mode 100644 index c46eeea3..00000000 --- a/site/public/datasets/pubfig/index.html +++ /dev/null @@ -1,117 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="PubFig is a dataset..." /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>PubFig</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pubfig/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">PubFig</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescrition ] -</span></div></div></section><section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2009</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>58,797 </div> - </div><div class='meta'> - <div class='gray'>Identities</div> - <div>200 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>mostly names from LFW but includes new names. 
Large variation in pose, lighting, expression, scene, camera, imaging conditions and parameters</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='http://www.cs.columbia.edu/CAVE/databases/pubfig/' target='_blank' rel='nofollow noopener'>columbia.edu</a></div> - </div></div><h2>PubFig</h2> -<p>[ PAGE UNDER DEVELOPMENT ]</p> -</section><section> - <h3>Who used PubFig?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how PubFig has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Public Figures Face Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
- </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html deleted file mode 100644 index 794e3e69..00000000 --- a/site/public/datasets/uccs/index.html +++ /dev/null @@ -1,255 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="UnConstrained College Students is a dataset of long-range surveillance photos of students at University of Colorado in Colorado Springs" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>UCCS</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">UnConstrained College Students</span> is a dataset of long-range surveillance photos of students at University of Colorado in Colorado Springs</span></div><div class='hero_subdesc'><span class='bgpad'>The UnConstrained College Students dataset includes 16,149 images and 1,732 identities of subjects on University of Colorado Colorado Springs campus and is used for making face recognition and face detection algorithms -</span></div></div></section><section><div class='left-sidebar'><div class='meta'> - <div 
class='gray'>Published</div> - <div>2016</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>16,149 </div> - </div><div class='meta'> - <div class='gray'>Identities</div> - <div>1,732 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>Face recognition, face detection</div> - </div><div class='meta'> - <div class='gray'>Created by</div> - <div>University of Colorado Colorado Springs (US)</div> - </div><div class='meta'> - <div class='gray'>Funded by</div> - <div>ODNI, IARPA, ONR MURI, Army SBIR, SOCOM SBIR</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='http://vast.uccs.edu/Opensetface/' target='_blank' rel='nofollow noopener'>uccs.edu</a></div> - </div></div><h2>UnConstrained College Students</h2> -<p>[ page under development ]</p> -<p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs. According to the authors of two papers associated with the dataset, subjects were "photographed using a long-range high-resolution surveillance camera without their knowledge" <a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 2">2</a>. To create the dataset, the researchers used a Canon 7D digital camera fitted with a Sigma 800mm telephoto lens and photographed students 150–200m away through their office window. Photos were taken during the morning and afternoon while students were walking to and from classes.
The primary uses of this dataset are to train, validate, and build face recognition and face detection algorithms for realistic surveillance scenarios.</p> -<p>What makes the UCCS dataset unique is that it includes the highest resolution images of any publicly available face recognition dataset discovered so far (18MP), that it was captured on a campus without consent or awareness using a long-range telephoto lens, and that it was funded by United States defense and intelligence agencies.</p> -<p>Combined funding sources for the creation of the initial and final release of the dataset include ODNI (Office of the Director of National Intelligence), IARPA (Intelligence Advanced Research Projects Activity), ONR MURI (Office of Naval Research and The Department of Defense Multidisciplinary University Research Initiative), Army SBIR (Small Business Innovation Research), SOCOM SBIR (Special Operations Command and Small Business Innovation Research), and the National Science Foundation. <a class="footnote_shim" name="[^funding_sb]_1"> </a><a href="#[^funding_sb]" class="footnote" title="Footnote 1">1</a> <a class="footnote_shim" name="[^funding_uccs]_2"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 2">2</a></p> -<p>In 2017 the UCCS face dataset was used for a defense and intelligence agency funded <a href="http://www.face-recognition-challenge.com/">face recognition challenge</a> at the International Joint Biometrics Conference in Denver, CO. And in 2018 the dataset was used for the <a href="https://erodner.github.io/ial2018eccv/">2nd Unconstrained Face Detection and Open Set Recognition Challenge</a> at the European Conference on Computer Vision (ECCV) in Munich, Germany. Additional research projects that have used the UCCS dataset are included below in the list of verified citations.</p> -</section><section> - <h3>Who used UCCS?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated.
Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how UCCS has been used around the world by commercial, military, and academic organizations, existing publicly available research citing UnConstrained College Students Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources.
These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section> - - <div class="hr-wave-holder"> - <div class="hr-wave-line hr-wave-line1"></div> - <div class="hr-wave-line hr-wave-line2"></div> - </div> - - <h2>Supplementary Information</h2> - -</section><section><h3>Dates and Times</h3> -<p>The images in UCCS were taken on 18 non-consecutive days during 2012–2013. Analysis of the <a href="assets/uccs_camera_exif.csv">EXIF data</a> embedded in original images reveal that most of the images were taken on Tuesdays, and the most frequent capture time throughout the week was 12:30PM.</p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot_days.png' alt=' UCCS photos captured per weekday © megapixels.cc'><div class='caption'> UCCS photos captured per weekday © megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_exif_plot.png' alt=' UCCS photos captured per 10-minute intervals per weekday © megapixels.cc'><div class='caption'> UCCS photos captured per 10-minute intervals per weekday © megapixels.cc</div></div></section><section><div class='columns columns-2'><div class='column'><h4>UCCS photos taken in 2012</h4> -<table> -<thead><tr> -<th>Date</th> -<th>Photos</th> -</tr> -</thead> -<tbody> -<tr> -<td>Feb 23, 2012</td> -<td>132</td> -</tr> -<tr> -<td>March 6, 2012</td> -<td>288</td> -</tr> -<tr> -<td>March 8, 2012</td> -<td>506</td> -</tr> -<tr> -<td>March 13, 2012</td> -<td>160</td> -</tr> -<tr> -<td>March 20, 2012</td> -<td>1,840</td> -</tr> -<tr> -<td>March 22, 2012</td> -<td>445</td> -</tr> -<tr> -<td>April 3, 2012</td> -<td>1,639</td> -</tr> -<tr> -<td>April 12, 2012</td> 
-<td>14</td> -</tr> -<tr> -<td>April 17, 2012</td> -<td>19</td> -</tr> -<tr> -<td>April 24, 2012</td> -<td>63</td> -</tr> -<tr> -<td>April 25, 2012</td> -<td>11</td> -</tr> -<tr> -<td>April 26, 2012</td> -<td>20</td> -</tr> -</tbody> -</table> -</div><div class='column'><h4>UCCS photos taken in 2013</h4> -<table> -<thead><tr> -<th>Date</th> -<th>Photos</th> -</tr> -</thead> -<tbody> -<tr> -<td>Jan 28, 2013</td> -<td>1,056</td> -</tr> -<tr> -<td>Jan 29, 2013</td> -<td>1,561</td> -</tr> -<tr> -<td>Feb 13, 2013</td> -<td>739</td> -</tr> -<tr> -<td>Feb 19, 2013</td> -<td>723</td> -</tr> -<tr> -<td>Feb 20, 2013</td> -<td>965</td> -</tr> -<tr> -<td>Feb 26, 2013</td> -<td>736</td> -</tr> -</tbody> -</table> -</div></div></section><section><h3>Location</h3> -<p>The location of the camera and subjects can be confirmed using several visual cues in the dataset images: the unique pattern of the sidewalk that is only used on the UCCS Pedestrian Spine near the West Lawn, the two UCCS sign poles with matching graphics still visible in Google Street View, the no parking sign and the directionality of its arrow, the back of the street sign next to it, the slight bend in the sidewalk, the presence of cars passing in the background of the image, and the far wall of the parking garage all match images in the dataset. The <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">original paper</a> also provides another clue: a <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1/figure/1">picture of the camera</a> inside the office that was used to create the dataset. The window view in this image provides another match for the brick pattern on the north facade of the Kraemer Family Library and the green metal fence along the sidewalk.
View the <a href="https://www.google.com/maps/place/University+of+Colorado+Colorado+Springs/@38.8934297,-104.7992445,27a,35y,258.51h,75.06t/data=!3m1!1e3!4m5!3m4!1s0x87134fa088fe399d:0x92cadf3962c058c4!8m2!3d38.8968312!4d-104.8049528">location on Google Maps</a></p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map.jpg' alt=' Location on campus where students were unknowingly photographed with a telephoto lens to be used for defense and intelligence agency funded research on face recognition. Image: Google Maps'><div class='caption'> Location on campus where students were unknowingly photographed with a telephoto lens to be used for defense and intelligence agency funded research on face recognition. Image: Google Maps</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map_3d.jpg' alt=' 3D view showing the angle of view of the surveillance camera used for UCCS dataset. Image: Google Maps'><div class='caption'> 3D view showing the angle of view of the surveillance camera used for UCCS dataset. Image: Google Maps</div></div></section><section><h3>Funding</h3> -<p>The UnConstrained College Students dataset is associated with two main research papers: "Large Scale Unconstrained Open Set Face Database" and "Unconstrained Face Detection and Open-Set Face Recognition Challenge". 
Collectively, these papers and the creation of the dataset have received funding from the following organizations:</p> -<ul> -<li>ONR (Office of Naval Research) MURI (The Department of Defense Multidisciplinary University Research Initiative) grant N00014-08-1-0638</li> -<li>Army SBIR (Small Business Innovation Research) grant W15P7T-12-C-A210</li> -<li>SOCOM (Special Operations Command) SBIR (Small Business Innovation Research) grant H92222-07-P-0020</li> -<li>National Science Foundation Grant IIS-1320956</li> -<li>ODNI (Office of the Director of National Intelligence)</li> -<li>IARPA (Intelligence Advanced Research Projects Activity) R&D contract 2014-14071600012</li> -</ul> -<h3>Opting Out</h3> -<p>If you attended University of Colorado Colorado Springs and were captured by the long-range surveillance camera used to create this dataset, there is unfortunately currently no way to be removed. The authors do not provide any options for students to opt out, nor were students informed that they would be used for training face recognition. According to the authors, the lack of any consent or knowledge of participation is what provides part of the value of the UnConstrained College Students dataset.</p> -<h3>Ethics</h3> -<p>Please direct any questions about the ethics of the dataset to the University of Colorado Colorado Springs <a href="https://www.uccs.edu/compliance/">Ethics and Compliance Office</a>.</p> -<h3>Technical Details</h3> -<p>For further technical information about the dataset, visit the <a href="https://vast.uccs.edu/Opensetface">UCCS dataset project page</a>.</p> -<h2>Under Development</h2> -<ul> -<li>adding more verified locations to map and charts</li> -<li>add EXIF file to CDN</li> -</ul> -</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^funding_sb]" class="footnote_shim"></a><span class="backlinks"><a href="#[^funding_sb]_1">a</a></span><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database."
2013.</p> -</li><li><a name="[^funding_uccs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^funding_uccs]_1">a</a><a href="#[^funding_uccs]_2">b</a></span><p>Günther, M. et al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. arXiv:1708.02337v3.</p> -</li></ul></section></section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
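The per-weekday EXIF tally described in the UCCS "Dates and Times" section above can be reproduced with a short script. A minimal sketch, assuming the linked CSV contains EXIF-style `YYYY:MM:DD HH:MM:SS` timestamps in a `DateTimeOriginal` column (the column name is an assumption about `uccs_camera_exif.csv`, not confirmed by the source):

```python
import csv
from collections import Counter
from datetime import datetime

def photos_per_weekday(lines, column="DateTimeOriginal"):
    """Tally photos per weekday from EXIF-style timestamps ('YYYY:MM:DD HH:MM:SS').

    `lines` is any iterable of CSV text lines beginning with a header row,
    so it works with both open("...csv") and an in-memory list.
    """
    counts = Counter()
    for row in csv.DictReader(lines):
        taken = datetime.strptime(row[column], "%Y:%m:%d %H:%M:%S")
        counts[taken.strftime("%A")] += 1  # weekday name, e.g. "Tuesday"
    return counts

# Demo with a few of the capture dates listed in the tables above;
# with the real file: photos_per_weekday(open("uccs_camera_exif.csv"))
demo = [
    "DateTimeOriginal",
    "2012:03:20 12:30:00",  # March 20, 2012 -- a Tuesday
    "2012:03:20 12:40:00",
    "2012:03:22 09:10:00",  # March 22, 2012 -- a Thursday
]
print(photos_per_weekday(demo))  # Counter({'Tuesday': 2, 'Thursday': 1})
```

The same Counter could be bucketed on `taken.hour` and `taken.minute // 10` to reproduce the 10-minute-interval plot.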
\ No newline at end of file diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html deleted file mode 100644 index 321fb203..00000000 --- a/site/public/datasets/vgg_face2/index.html +++ /dev/null @@ -1,142 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="VGG Face 2 Dataset" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>Brainwash Dataset</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2015</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>11,917 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>Head detection</div> - </div><div class='meta'> - <div class='gray'>Created by</div> - <div>Stanford University (US), Max Planck Institute for Informatics (DE)</div> - </div><div class='meta'> - <div class='gray'>Funded by</div> - <div>Max Planck Center for Visual Computing and Communication</div> - </div><div class='meta'> - <div class='gray'>Download Size</div> - <div>4.1 GB</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div> - </div></div><h2>VGG Face 
2</h2> -<p>[ page under development ]</p> -</section><section> - <h3>Who used Brainwash Dataset?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. - </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how Brainwash Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Brainwash Dataset was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers.
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h3>(ignore) research notes</h3> -<ul> -<li>The VGG Face 2 dataset includes approximately 1,331 actresses, 139 presidents, 16 wives, 3 husbands, 2 snooker player, and 1 guru</li> -<li>The original VGGF2 name list has been updated with the results returned from Google Knowledge</li> -<li>Names with a similarity score greater than 0.75 where automatically updated. Scores computed using <code>import difflib; seq = difflib.SequenceMatcher(a=a.lower(), b=b.lower()); score = seq.ratio()</code></li> -<li>The 97 names with a score of 0.75 or lower were manually reviewed and includes name changes validating using Wikipedia.org results for names such as "Bruce Jenner" to "Caitlyn Jenner", spousal last-name changes, and discretionary changes to improve search results such as combining nicknames with full name when appropriate, for example changing "Aleksandar Petrović" to "Aleksandar 'Aco' Petrović" and minor changes such as "Mohammad Ali" to "Muhammad Ali"</li> -<li>The 'Description' text was automatically added when the Knowledge Graph score was greater than 250</li> -</ul> -<h2>TODO</h2> -<ul> -<li>create name list, and populate with Knowledge graph information like LFW</li> -<li>make list of interesting number stats, by the numbers</li> -<li>make list of interesting important facts</li> -<li>write intro abstract</li> -<li>write analysis of usage</li> -<li>find examples, citations, and screenshots of useage</li> -<li>find list of companies using it for table</li> -<li>create montages of the dataset, like LFW</li> -<li>create right to removal information</li> -</ul> -</section> - - </div> - <footer> - <div> - <a 
href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
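The name-reconciliation step in the research notes above already quotes its core one-liner; expanded into a runnable sketch of the described workflow (the 0.75 auto-update threshold is from the notes; the example name pairs are ones the notes mention, used here purely for illustration):

```python
import difflib

def name_similarity(a, b):
    """Case-insensitive sequence similarity between two names, 0.0-1.0."""
    seq = difflib.SequenceMatcher(a=a.lower(), b=b.lower())
    return seq.ratio()

# Per the notes: pairs scoring above 0.75 were updated automatically;
# the 97 names at or below the threshold were reviewed by hand.
for old, new in [("Mohammad Ali", "Muhammad Ali"), ("Bruce Jenner", "Caitlyn Jenner")]:
    score = name_similarity(old, new)
    action = "auto-update" if score > 0.75 else "manual review"
    print(f"{old} -> {new}: {score:.2f} ({action})")
```

Small spelling variants score high enough to pass the threshold, while substantive renames fall below it and get flagged for review, which matches the split described in the notes.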
\ No newline at end of file diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html deleted file mode 100644 index ffce01fe..00000000 --- a/site/public/datasets/viper/index.html +++ /dev/null @@ -1,122 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="VIPeR is a person re-identification dataset of images captured at UC Santa Cruz in 2007" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>VIPeR</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-dataset"> - - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">VIPeR</span> is a person re-identification dataset of images captured at UC Santa Cruz in 2007</span></div><div class='hero_subdesc'><span class='bgpad'>VIPeR contains 1,264 images and 632 persons on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance -</span></div></div></section><section><div class='left-sidebar'><div class='meta'> - <div class='gray'>Published</div> - <div>2007</div> - </div><div class='meta'> - <div class='gray'>Images</div> - <div>1,264 </div> - </div><div class='meta'> - <div 
class='gray'>Identities</div> - <div>632 </div> - </div><div class='meta'> - <div class='gray'>Purpose</div> - <div>Person re-identification</div> - </div><div class='meta'> - <div class='gray'>Created by</div> - <div>University of California Santa Cruz</div> - </div><div class='meta'> - <div class='gray'>Website</div> - <div><a href='https://vision.soe.ucsc.edu/node/178' target='_blank' rel='nofollow noopener'>ucsc.edu</a></div> - </div></div><h2>VIPeR Dataset</h2> -<p>[ page under development ]</p> -<p><em>VIPeR (Viewpoint Invariant Pedestrian Recognition)</em> is a dataset of pedestrian images captured at University of California Santa Cruz in 2007. According to the researchers, "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."</p> -<p>VIPeR is amongst the most widely used publicly available person re-identification datasets. In 2017 the VIPeR dataset was combined into a larger person re-identification dataset created by the Chinese University of Hong Kong called PETA (PEdesTrian Attribute).</p> -</section><section> - <h3>Who used VIPeR?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
- </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how VIPeR has been used around the world by commercial, military, and academic organizations, existing publicly available research citing Viewpoint Invariant Pedestrian Recognition was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
- </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html deleted file mode 100644 index b19add4e..00000000 --- a/site/public/datasets/youtube_celebrities/index.html +++ /dev/null @@ -1,113 +0,0 @@ -<!doctype html> -<html> -<head> - <title>MegaPixels</title> - <meta charset="utf-8" /> - <meta name="author" content="Adam Harvey" /> - <meta name="description" content="YouTube Celebrities" /> - <meta name="referrer" content="no-referrer" /> - <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> - <link rel='stylesheet' href='/assets/css/fonts.css' /> - <link rel='stylesheet' href='/assets/css/css.css' /> - <link rel='stylesheet' href='/assets/css/leaflet.css' /> - <link rel='stylesheet' href='/assets/css/applets.css' /> -</head> -<body> - <header> - <a class='slogan' href="/"> - <div class='logo'></div> - <div class='site_name'>MegaPixels</div> - <div class='splash'>YouTube Celebrities</div> - </a> - <div class='links'> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - </div> - </header> - <div class="content content-"> - - <section><div class='left-sidebar'></div><h2>YouTube Celebrities</h2> -<p>[ page under development ]</p> -</section><section> - <h3>Who used YouTube Celebrities?</h3> - - <p> - This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
- </p> - - </section> - -<section class="applet_container"> -<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> -</div> --> - <div class="applet" data-payload="{"command": "chart"}"></div> -</section> - -<section class="applet_container"> - <div class="applet" data-payload="{"command": "piechart"}"></div> -</section> - -<section> - - <h3>Biometric Trade Routes</h3> - - <p> - To help understand how YouTube Celebrities has been used around the world by commercial, military, and academic organizations, existing publicly available research citing YouTube Celebrities was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. - </p> - - </section> - -<section class="applet_container fullwidth"> - <div class="applet" data-payload="{"command": "map"}"></div> -</section> - -<div class="caption"> - <ul class="map-legend"> - <li class="edu">Academic</li> - <li class="com">Commercial</li> - <li class="gov">Military / Government</li> - </ul> - <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div > -</div> - - -<section class="applet_container"> - - <h3>Dataset Citations</h3> - <p> - The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
- </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h4>Notes...</h4> -<ul> -<li>Selected dataset sequences: (a) MBGC, (b) CMU MoBo, (c) First -Honda/UCSD, and (d) YouTube Celebrities.</li> -<li>This research is supported by the Central Intelligence Agency, the Biometrics -Task Force and the Technical Support Working Group through US Army contract -W91CRB-08-C-0093. The opinions, findings, and conclusions or recommendations -expressed in this publication are those of the authors and do not necessarily reflect -the views of our sponsors.</li> -<li>in "Face Recognition From Video Draft 17"</li> -<li>International Journal of Pattern Recognition and Artificial Intelligence, World Scientific Publishing Company</li> -</ul> -</section> - - </div> - <footer> - <div> - <a href="/">MegaPixels.cc</a> - <a href="/datasets/">Datasets</a> - <a href="/about/">About</a> - <a href="/about/press/">Press</a> - <a href="/about/legal/">Legal and Privacy</a> - </div> - <div> - MegaPixels ©2017-19 Adam R. Harvey / - <a href="https://ahprojects.com">ahprojects.com</a> - </div> - </footer> -</body> - -<script src="/assets/js/dist/index.js"></script> -</html>
\ No newline at end of file |
