Diffstat (limited to 'site/public/datasets')
-rw-r--r--  site/public/datasets/50_people_one_question/index.html |  35
-rw-r--r--  site/public/datasets/brainwash/index.html              |  88
-rw-r--r--  site/public/datasets/celeba/index.html                 |  35
-rw-r--r--  site/public/datasets/cofw/index.html                   |  40
-rw-r--r--  site/public/datasets/duke_mtmc/index.html              |  64
-rw-r--r--  site/public/datasets/hrt_transgender/index.html        |  49
-rw-r--r--  site/public/datasets/index.html                        |  14
-rw-r--r--  site/public/datasets/lfw/index.html                    |  49
-rw-r--r--  site/public/datasets/market_1501/index.html            |  35
-rw-r--r--  site/public/datasets/msceleb/index.html                | 136
-rw-r--r--  site/public/datasets/pipa/index.html                   |  35
-rw-r--r--  site/public/datasets/uccs/index.html                   |  62
-rw-r--r--  site/public/datasets/viper/index.html                  |  49
13 files changed, 427 insertions, 264 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html index bded7fbd..8e3d2d2b 100644 --- a/site/public/datasets/50_people_one_question/index.html +++ b/site/public/datasets/50_people_one_question/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -45,28 +45,31 @@ </div> --> <p> - To understand how 50 People One Question Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how 50 People One Question Dataset has been used around the world for commercial, military, and academic research, publicly available research citing 50 People One Question is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
</p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of 50 People One Question Dataset. + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] 50 People One Question Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -74,16 +77,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. 
The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html index 41484257..c97349aa 100644 --- a/site/public/datasets/brainwash/index.html +++ b/site/public/datasets/brainwash/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco" /> + <meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,36 +26,29 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash 
is a dataset of webcam images taken from the Brainwash Cafe in San Francisco</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection algorithms -</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2015</div></div><div><div class='gray'>Images</div><div>11,918</div></div><div><div class='gray'>Faces</div><div>91,146</div></div><div><div class='gray'>Created by</div><div>Stanford Department of Computer Science</div></div><div><div class='gray'>Funded by</div><div>Max Planck Center for Visual Computing and Communication</div></div><div><div class='gray'>Location</div><div>Brainwash Cafe, San Franscisco</div></div><div><div class='gray'>Purpose</div><div>Training face detection</div></div><div><div class='gray'>Website</div><div><a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">stanford.edu</a></div></div><div><div class='gray'>Paper</div><div><a href="http://arxiv.org/abs/1506.04878">End-to-End People Detection in Crowded Scenes</a></div></div><div><div class='gray'>Explicit Consent</div><div>No</div></div></div></div><h2>Brainwash Dataset</h2> -<p>(PAGE UNDER DEVELOPMENT)</p> -<p><em>Brainwash</em> is a face detection dataset created from the Brainwash Cafe's livecam footage including 11,918 images of "everyday life of a busy downtown cafe<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a>". The images are used to develop face detection algorithms for the "challenging task of detecting people in crowded scenes" and tracking them.</p> -<p>Before closing in 2017, Brainwash Cafe was a "cafe and laundromat" located in San Francisco's SoMA district. 
The cafe published a publicy available livestream from the cafe with a view of the cash register, performance stage, and seating area.</p> -<p>Since it's publication by Stanford in 2015, the Brainwash dataset has appeared in several notable research papers. In September 2016 four researchers from the National University of Defense Technology in Changsha, China used the Brainwash dataset for a research study on "people head detection in crowded scenes", concluding that their algorithm "achieves superior head detection performance on the crowded scenes dataset<a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 2">2</a>". And again in 2017 three researchers at the National University of Defense Technology used Brainwash for a study on object detection noting "the data set used in our experiment is shown in Table 1, which includes one scene of the brainwash dataset<a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 3">3</a>".</p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/00425000_960.jpg' alt=' An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_montage.jpg' alt=' 49 of the 11,918 images included in the Brainwash dataset. 
License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2015</div></div><div><div class='gray'>Images</div><div>11,918</div></div><div><div class='gray'>Faces</div><div>91,146</div></div><div><div class='gray'>Created by</div><div>Stanford University (US)<br>Max Planck Institute for Informatics (DE)</div></div><div><div class='gray'>Funded by</div><div>Max Planck Center for Visual Computing and Communication</div></div><div><div class='gray'>Purpose</div><div>Head detection</div></div><div><div class='gray'>Download Size</div><div>4.1GB</div></div><div><div class='gray'>Website</div><div><a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">stanford.edu</a></div></div></div></div><h2>Brainwash Dataset</h2> +<p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset was captured on three days in 2014: October 27, November 13, and November 24.
According to the author's research paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper].</p> +<p>Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 2">2</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 3">3</a>.</p> +<p>If you happen to have been at the Brainwash Cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset.</p> </section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_mean_overlay.jpg' alt=' The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey'><div class='caption'> The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey</div></div></section><section> <h3>Who used Brainwash Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
</p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -67,28 +60,41 @@ </div> --> <p> - To understand how Brainwash Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how Brainwash Dataset has been used around the world for commercial, military, and academic research, publicly available research citing Brainwash Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of Brainwash Dataset.
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] Brainwash Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section><p>Add more analysis here</p> +</section> + --><section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + </p> + <p> + Add [button/link] to download CSV. Add search input field to filter. 
+ </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> </section><section> @@ -97,23 +103,19 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> -</section><section class="applet_container"> - - <h3>Citations</h3> - <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. - </p> - <p> - Add button/link to download CSV - </p> - - <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h3>Additional Information</h3> + <h3>Supplementary Information</h3> + +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/00425000_960.jpg' alt=' A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> A sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The dataset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_montage.jpg' alt=' 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> 49 of the 11,918 images included in the Brainwash dataset.
License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><h4>Additional Resources</h4> <ul> <li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li> </ul> +<p>TODO</p> +<ul> +<li>add bounding boxes to the header image</li> +<li>remake montage with randomized images, with bboxes</li> +<li>clean up intro text</li> +<li>verify quote citations</li> +</ul> </section><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p> </li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"><a href="#[^localized_region_context]_1">a</a></span><p>Li, Y., Dou, Y., Liu, X., and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p> </li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^replacement_algorithm]_1">a</a></span><p>Zhao, X., Wang, Y., and Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p> diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html index 09347f10..e958cbef 100644 --- a/site/public/datasets/celeba/index.html +++ b/site/public/datasets/celeba/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae.
Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -45,28 +45,31 @@ </div> --> <p> - To understand how CelebA Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how CelebA Dataset has been used around the world for commercial, military, and academic research, publicly available research citing Large-scale CelebFaces Attributes Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of CelebA Dataset. + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map.
Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] CelebA Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -74,16 +77,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. 
</p> <div class="applet" data-payload="{"command": "citations"}"></div> diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html index eac1f7a6..7ac30579 100644 --- a/site/public/datasets/cofw/index.html +++ b/site/public/datasets/cofw/index.html @@ -43,7 +43,7 @@ To increase the number of training images, and since COFW has the exact same la <p><a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a></p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -55,28 +55,31 @@ </div> --> <p> - To understand how COFW Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how COFW Dataset has been used around the world for commercial, military, and academic research, publicly available research citing Caltech Occluded Faces in the Wild is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of COFW Dataset.
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] COFW Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -84,16 +87,16 @@ To increase the number of training images, and since COFW has the exact same la <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> @@ -101,13 +104,14 @@ To increase the number of training images, and since COFW has the exact same la <h3>Who used COFW Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> </section><section><p>TODO</p> <h2>- replace graphic</h2> diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html index 299331d7..9664181e 100644 --- a/site/public/datasets/duke_mtmc/index.html +++ b/site/public/datasets/duke_mtmc/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Duke MTMC is a dataset of CCTV footage of students at Duke University" /> + <meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" 
content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,12 +26,17 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of CCTV footage of students at Duke University</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 cameras at Duke University campus in March 2014 -</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>March 19, 2014</div></div><div><div class='gray'>Cameras</div><div>8</div></div><div><div class='gray'>Video Frames</div><div>2,000,000</div></div><div><div class='gray'>Identities</div><div>Over 2,000</div></div><div><div class='gray'>Used for</div><div>Person re-identification, <br>face recognition</div></div><div><div class='gray'>Sector</div><div>Academic</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2> -<p>(PAGE UNDER DEVELOPMENT)</p> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 HD cameras at Duke University campus in March 2014 
+</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Created</div><div>2014</div></div><div><div class='gray'>Identities</div><div>Over 2,700</div></div><div><div class='gray'>Used for</div><div>Face recognition, person re-identification</div></div><div><div class='gray'>Created by</div><div>Computer Science Department, Duke University, Durham, US</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2>
+<p>[ PAGE UNDER DEVELOPMENT ]</p>
+<p>Duke MTMC is a dataset of video recorded on Duke University campus for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em>. The videos were recorded during February and March 2014.</p>
+<p>Includes a total of 888.8 minutes of video (ind. verified)</p>
+<p>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy."</p>
+<p>The dataset includes more than 2,700 annotated identities appearing in 85 minutes of video from each of 8 cameras located throughout Duke University's campus.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg' alt=' Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey'><div class='caption'> Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. 
(c) Adam Harvey</div></div></section><section><p>According to the dataset authors,</p>
</section><section>

-	<h3>Information Supply Chain</h3>
+	<h3>Biometric Trade Routes</h3>
<!--
	<div class="map-sidebar right-sidebar">
		<h3>Legend</h3>
@@ -43,47 +48,44 @@
	</div>
 -->
	<p>
-		To understand how Duke MTMC Dataset has been used around the world...
-		affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast
+		To help understand how Duke MTMC Dataset has been used around the world for commercial, military and academic research, publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
	</p>

 </section>

-<section class="applet_container">
+<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
</section>

<div class="caption">
-	<div class="map-legend-item edu">Academic</div>
-	<div class="map-legend-item com">Industry</div>
-	<div class="map-legend-item gov">Government</div>
-	Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of Duke MTMC Dataset.
+	<ul class="map-legend">
+		<li class="edu">Academic</li>
+		<li class="com">Commercial</li>
+		<li class="gov">Military / Government</li>
+		<li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li>
+	</ul>
</div>

-<section>
+<!-- <section>
	<p class='subp'>
-		Standardized paragraph of text about the map. 
Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] Duke MTMC Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used Duke MTMC Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. 
- </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -93,21 +95,23 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> </section><section><h2>Research Notes</h2> <ul> +<li>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy." 
- 27a2fad58dd8727e280f97036e0d2bc55ef5424c</li> +<li>"This work was supported in part by the EPSRC Programme Grant (FACER2VM) EP/N007743/1, EPSRC/dstl/MURI project EP/R018456/1, the National Natural Science Foundation of China (61373055, 61672265, 61602390, 61532009, 61571313), Chinese Ministry of Education (Z2015101), Science and Technology Department of Sichuan Province (2017RZ0009 and 2017FZ0029), Education Department of Sichuan Province (15ZB0130), the Open Research Fund from Province Key Laboratory of Xihua University (szjj2015-056) and the NVIDIA GPU Grant Program." - ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b</li> <li>"DukeMTMC aims to accelerate advances in multi-target multi-camera tracking. It provides a tracking system that works within and across cameras, a new large scale HD video data set recorded by 8 synchronized cameras with more than 7,000 single camera trajectories and over 2,000 unique identities, and a new performance evaluation method that measures how often a system is correct about who is where"</li> <li><p>DukeMTMC is a new, manually annotated, calibrated, multi-camera data set recorded outdoors on the Duke University campus with 8 synchronized cameras. It consists of:</p> <p>8 static cameras x 85 minutes of 1080p 60 fps video diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html index e38e134b..ed36abb5 100644 --- a/site/public/datasets/hrt_transgender/index.html +++ b/site/public/datasets/hrt_transgender/index.html @@ -32,26 +32,20 @@ <h3>Who used HRT Transgender?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
</p>

 </section>

<section class="applet_container">
+<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
+</div> -->
 <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
-</section><section>
-	<p>
-		These pie charts show overall totals based on country and institution type.
-	</p>
-
-	</section>
-
-<section class="applet_container">
+</section><section class="applet_container">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
</section><section>

-	<h3>Information Supply Chain</h3>
+	<h3>Biometric Trade Routes</h3>
<!--
	<div class="map-sidebar right-sidebar">
		<h3>Legend</h3>
@@ -63,28 +57,31 @@
	</div>
 -->
	<p>
-		To understand how HRT Transgender has been used around the world...
-		affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast
+		To help understand how HRT Transgender has been used around the world for commercial, military and academic research, publicly available research citing HRT Transgender Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
	</p>

 </section>

-<section class="applet_container">
+<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
</section>

<div class="caption">
-	<div class="map-legend-item edu">Academic</div>
-	<div class="map-legend-item com">Industry</div>
-	<div class="map-legend-item gov">Government</div>
-	Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of HRT Transgender. 
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] HRT Transgender ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -92,16 +89,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html index f618e86b..1d2630e1 100644 --- a/site/public/datasets/index.html +++ b/site/public/datasets/index.html @@ -28,7 +28,7 @@ <section><h1>Facial Recognition Datasets</h1> -<h3>Survey</h3> +<p>Explore publicly available facial recognition datasets. More datasets will be added throughout 2019.</p> </section> <section class='applet_container autosize'><div class='applet' data-payload='{"command":"dataset_list"}'></div></section> @@ -85,6 +85,18 @@ </div> </a> + <a href="/datasets/msceleb/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>MS Celeb</span> + <div class='fields'> + <div class='year visible'><span>2016</span></div> + <div class='purpose'><span>face recognition</span></div> + <div class='images'><span>1,000,000 images</span></div> + <div class='identities'><span>100,000 </span></div> + </div> + </div> + </a> + <a href="/datasets/pipa/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/index.jpg)"> <div class="dataset"> <span class='title'>People in Photo Albums</span> diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html index b4923877..22384d77 100644 --- a/site/public/datasets/lfw/index.html +++ b/site/public/datasets/lfw/index.html @@ -46,7 +46,7 @@ <p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entire 
from online photos. However, <em>Names and Faces</em> and <em>LFW</em> are not the first face recognition dataset created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. Images obtained "in the wild" means using an image without explicit consent or awareness from the subject or photographer.</p>
</section><section>

-	<h3>Information Supply Chain</h3>
+	<h3>Biometric Trade Routes</h3>
<!--
	<div class="map-sidebar right-sidebar">
		<h3>Legend</h3>
@@ -58,47 +58,44 @@
	</div>
 -->
	<p>
-		To understand how LFW has been used around the world...
-		affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast
+		To help understand how LFW has been used around the world for commercial, military and academic research, publicly available research citing Labeled Faces in the Wild is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
	</p>

 </section>

-<section class="applet_container">
+<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
</section>

<div class="caption">
-	<div class="map-legend-item edu">Academic</div>
-	<div class="map-legend-item com">Industry</div>
-	<div class="map-legend-item gov">Government</div>
-	Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of LFW. 
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] LFW ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used LFW?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. 
- </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -108,16 +105,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. 
</p>

	<div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index a80c23fa..9a05d20e 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -31,7 +31,7 @@
 <p>(PAGE UNDER DEVELOPMENT)</p>
 </section><section>

-	<h3>Information Supply Chain</h3>
+	<h3>Biometric Trade Routes</h3>
<!--
	<div class="map-sidebar right-sidebar">
		<h3>Legend</h3>
@@ -43,28 +43,31 @@
	</div>
 -->
	<p>
-		To understand how Market 1501 has been used around the world...
-		affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast
+		To help understand how Market 1501 has been used around the world for commercial, military and academic research, publicly available research citing Market 1501 Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
	</p>

 </section>

-<section class="applet_container">
+<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
</section>

<div class="caption">
-	<div class="map-legend-item edu">Academic</div>
-	<div class="map-legend-item com">Industry</div>
-	<div class="map-legend-item gov">Government</div>
-	Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of Market 1501. 
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] Market 1501 ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -72,16 +75,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html new file mode 100644 index 00000000..0ddf0c68 --- /dev/null +++ b/site/public/datasets/msceleb/index.html @@ -0,0 +1,136 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/tabulator.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms</span></div><div 
class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10,000,000 images and 93,000 identities of semi-public figures collected using the Bing search engine +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div><div><div class='gray'>Created by</div><div>TBD</div></div></div></div><h2>Microsoft Celeb Dataset (MS Celeb)</h2> +<p>(PAGE UNDER DEVELOPMENT)</p> +<p>At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non-provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio.</p> +<p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> +</section><section> + <h3>Who used MsCeleb?</h3> + + <p> + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
+	</p>
+
+	</section>
+
+<section class="applet_container">
+<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
+</div> -->
+	<div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section><section class="applet_container">
+	<div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section><section>
+
+	<h3>Biometric Trade Routes</h3>
+<!--
+	<div class="map-sidebar right-sidebar">
+		<h3>Legend</h3>
+		<ul>
+			<li><span style="color: #f2f293">■</span> Industry</li>
+			<li><span style="color: #f30000">■</span> Academic</li>
+			<li><span style="color: #3264f6">■</span> Government</li>
+		</ul>
+	</div>
+ -->
+	<p>
+		To help understand how MsCeleb has been used around the world for commercial, military and academic research, publicly available research citing Microsoft Celebrity Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+	</p>
+
+	</section>
+
+<section class="applet_container fullwidth">
+	<div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
+</section>
+
+<div class="caption">
+	<ul class="map-legend">
+		<li class="edu">Academic</li>
+		<li class="com">Commercial</li>
+		<li class="gov">Military / Government</li>
+		<li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li>
+	</ul>
+</div>
+
+<!-- <section>
+	<p class='subp'>
+		[section under development] MsCeleb ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. 
+ </p> +</section> + --><section><p>Add more analysis here</p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h3>Supplementary Information</h3> + +</section><section class="applet_container"> + + <h3>Dataset Citations</h3> + <p> + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + </p> + <p> + Add [button/link] to download CSV. Add search input field to filter. + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section><h3>Additional Information</h3> +<ul> +<li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li> +</ul> +</section><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p> +</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p> +</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"></span><p>Zhao. X, Wang Y, Dou, Y. 
A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p> +</li></ul></section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/about/disclaimer/">Disclaimer</a> + <a href="/about/terms/">Terms of Use</a> + <a href="/about/privacy/">Privacy</a> + <a href="/about/">About</a> + <a href="/about/team/">Team</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index 62754070..9e7eb164 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -31,7 +31,7 @@
 <p>(PAGE UNDER DEVELOPMENT)</p>
 </section><section>

-	<h3>Information Supply Chain</h3>
+	<h3>Biometric Trade Routes</h3>
<!--
	<div class="map-sidebar right-sidebar">
		<h3>Legend</h3>
@@ -43,28 +43,31 @@
	</div>
 -->
	<p>
-		To understand how PIPA Dataset has been used around the world...
-		affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast
+		To help understand how PIPA Dataset has been used around the world for commercial, military and academic research, publicly available research citing People in Photo Albums Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
	</p>

 </section>

-<section class="applet_container">
+<section class="applet_container fullwidth">
 <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+
</section>

<div class="caption">
-	<div class="map-legend-item edu">Academic</div>
-	<div class="map-legend-item com">Industry</div>
-	<div class="map-legend-item gov">Government</div>
-	Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of PIPA Dataset. 
+ <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] PIPA Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -72,16 +75,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html index 08000c6e..2477c9f8 100644 --- a/site/public/datasets/uccs/index.html +++ b/site/public/datasets/uccs/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Unconstrained College Students (UCCS) is a dataset of images ..." /> + <meta name="description" content="Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,12 +26,12 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Unconstrained College Students (UCCS)</span> is a dataset of images ...</span></div><div class='hero_subdesc'><span class='bgpad'>The UCCS dataset includes ... 
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>TBD</div></div><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div></div></div><h2>Unconstrained College Students ...</h2> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Unconstrained College Students (UCCS)</span> is a dataset of long-range surveillance photos of students taken without their knowledge</span></div><div class='hero_subdesc'><span class='bgpad'>The UCCS dataset includes 16,149 images and 1,732 identities of students at the University of Colorado Colorado Springs campus and is used for face recognition and face detection +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2018</div></div><div><div class='gray'>Images</div><div>16,149</div></div><div><div class='gray'>Identities</div><div>1,732</div></div><div><div class='gray'>Used for</div><div>Face recognition, face detection</div></div><div><div class='gray'>Created by</div><div>University of Colorado Colorado Springs (US)</div></div><div><div class='gray'>Funded by</div><div>ODNI, IARPA, ONR MURI, Army SBIR, SOCOM SBIR</div></div><div><div class='gray'>Website</div><div><a href="https://vast.uccs.edu/Opensetface/">vast.uccs.edu</a></div></div></div></div><h2>Unconstrained College Students ...</h2> <p>(PAGE UNDER DEVELOPMENT)</p> -</section><section> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg' alt=' The pixel-average of all Unconstrained College Students images is shown with all 51,838 
face annotations. (c) Adam Harvey'><div class='caption'> The pixel-average of all Unconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey</div></div></section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -43,47 +43,44 @@ </div> --> <p> - To understand how UCCS has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how UCCS has been used around the world for commercial, military, and academic research, publicly available research citing UnConstrained College Students Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of UCCS. + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. 
Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] UCCS ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used UCCS?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -93,16 +90,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. 
The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> @@ -119,6 +116,11 @@ <li>ODNI (Office of the Director of National Intelligence)</li> <li>IARPA (Intelligence Advanced Research Projects Activity) R&D contract 2014-14071600012</li> </ul> +<p>" In most face detection/recognition datasets, the majority of images are “posed”, i.e. the subjects know they are being photographed, and/or the images are selected for publication in public media. Hence, blurry, occluded and badly illuminated images are generally uncommon in these datasets. In addition, most of these challenges are close-set, i.e. the list of subjects in the gallery is the same as the one used for testing.</p> +<p>This challenge explores more unconstrained data, by introducing the new UnConstrained College Students (UCCS) dataset, where subjects are photographed using a long-range high-resolution surveillance camera without their knowledge. Faces inside these images are of various poses, and varied levels of blurriness and occlusion. 
The challenge also creates an open set recognition problem, where unknown people will be seen during testing and must be rejected.</p> +<p>With this challenge, we hope to foster face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet.</p> +<p>UnConstrained College Students (UCCS) Dataset</p> +<p>The UCCS dataset was collected over several months using Canon 7D camera fitted with Sigma 800mm F5.6 EX APO DG HSM lens, taking images at one frame per second, during times when many students were walking on the sidewalk. "</p> <div class="footnotes"> <hr> <ol><li id="fn-funding_sb"><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013.<a href="#fnref-funding_sb" class="footnote">↩</a></p></li> diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html index 5acd0845..e94568a3 100644 --- a/site/public/datasets/viper/index.html +++ b/site/public/datasets/viper/index.html @@ -35,26 +35,20 @@ <h3>Who used VIPeR?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> <section class="applet_container"> +<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> +</div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. 
- </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -66,28 +60,31 @@ </div> --> <p> - To understand how VIPeR has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how VIPeR has been used around the world for commercial, military, and academic research, publicly available research citing Viewpoint Invariant Pedestrian Recognition is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> - <div class="map-legend-item edu">Academic</div> - <div class="map-legend-item com">Industry</div> - <div class="map-legend-item gov">Government</div> - Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and has been manually verified to show usage of VIPeR. + <ul class="map-legend"> + <li class="edu">Academic</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> + <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</li> + </ul> </div> -<section> +<!-- <section> <p class='subp'> - Standardized paragraph of text about the map. 
Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + [section under development] VIPeR ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -95,16 +92,16 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train and/or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> - Add button/link to download CSV + Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div>
