Diffstat (limited to 'site/public/datasets')
| -rw-r--r-- | site/public/datasets/50_people_one_question/index.html | 25 |
| -rw-r--r-- | site/public/datasets/brainwash/index.html | 78 |
| -rw-r--r-- | site/public/datasets/celeba/index.html | 25 |
| -rw-r--r-- | site/public/datasets/cofw/index.html | 28 |
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 50 |
| -rw-r--r-- | site/public/datasets/hrt_transgender/index.html | 37 |
| -rw-r--r-- | site/public/datasets/lfw/index.html | 37 |
| -rw-r--r-- | site/public/datasets/market_1501/index.html | 25 |
| -rw-r--r-- | site/public/datasets/msceleb/index.html | 37 |
| -rw-r--r-- | site/public/datasets/pipa/index.html | 25 |
| -rw-r--r-- | site/public/datasets/uccs/index.html | 52 |
| -rw-r--r-- | site/public/datasets/viper/index.html | 37 |
12 files changed, 213 insertions, 243 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html index 988ce2dc..8e3d2d2b 100644 --- a/site/public/datasets/50_people_one_question/index.html +++ b/site/public/datasets/50_people_one_question/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -45,30 +45,31 @@ </div> --> <p> - To understand how 50 People One Question Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how 50 People One Question Dataset has been used around the world for commercial, military and academic research; publicly available research citing 50 People One Question is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] 50 People One Question Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -76,13 +77,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. 
Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html index 20f2f096..c97349aa 100644 --- a/site/public/datasets/brainwash/index.html +++ b/site/public/datasets/brainwash/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco" /> + <meta name="description" content="Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,18 +26,16 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection algorithms -</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2015</div></div><div><div class='gray'>Images</div><div>11,918</div></div><div><div class='gray'>Faces</div><div>91,146</div></div><div><div class='gray'>Created by</div><div>Stanford Department of Computer Science</div></div><div><div class='gray'>Funded by</div><div>Max Planck Center for Visual Computing and Communication</div></div><div><div class='gray'>Location</div><div>Brainwash Cafe, San Franscisco</div></div><div><div class='gray'>Purpose</div><div>Training face detection</div></div><div><div class='gray'>Website</div><div><a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">stanford.edu</a></div></div><div><div class='gray'>Paper</div><div><a href="http://arxiv.org/abs/1506.04878">End-to-End People Detection in Crowded Scenes</a></div></div><div><div class='gray'>Explicit Consent</div><div>No</div></div></div></div><h2>Brainwash Dataset</h2> -<p>(PAGE UNDER DEVELOPMENT)</p> -<p><em>Brainwash</em> is a face detection dataset created from the Brainwash Cafe's livecam footage including 11,918 images of "everyday life of a busy downtown cafe<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a>". The images are used to develop face detection algorithms for the "challenging task of detecting people in crowded scenes" and tracking them.</p> -<p>Before closing in 2017, Brainwash Cafe was a "cafe and laundromat" located in San Francisco's SoMA district. The cafe published a publicy available livestream from the cafe with a view of the cash register, performance stage, and seating area.</p> -<p>Since it's publication by Stanford in 2015, the Brainwash dataset has appeared in several notable research papers. 
In September 2016 four researchers from the National University of Defense Technology in Changsha, China used the Brainwash dataset for a research study on "people head detection in crowded scenes", concluding that their algorithm "achieves superior head detection performance on the crowded scenes dataset<a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 2">2</a>". And again in 2017 three researchers at the National University of Defense Technology used Brainwash for a study on object detection noting "the data set used in our experiment is shown in Table 1, which includes one scene of the brainwash dataset<a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 3">3</a>".</p> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/00425000_960.jpg' alt=' An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_montage.jpg' alt=' 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2015</div></div><div><div class='gray'>Images</div><div>11,918</div></div><div><div class='gray'>Faces</div><div>91,146</div></div><div><div class='gray'>Created by</div><div>Stanford University (US)<br>Max Planck Institute for Informatics (DE)</div></div><div><div class='gray'>Funded by</div><div>Max Planck Center for Visual Computing and Communication</div></div><div><div class='gray'>Purpose</div><div>Head detection</div></div><div><div class='gray'>Download Size</div><div>4.1GB</div></div><div><div class='gray'>Website</div><div><a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">stanford.edu</a></div></div></div></div><h2>Brainwash Dataset</h2> +<p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. 
It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the entire day. The Brainwash dataset was captured on three days in 2014: October 27, November 13, and November 24. According to the authors' research paper introducing the dataset, the images were acquired with the help of Angelcam.com [cite orig paper].</p> +<p>Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015, it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 2">2</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 3">3</a>.</p> +<p>If you happen to have been at Brainwash Cafe in San Francisco at any time on October 27, November 13, or November 24, 2014, you are most likely included in the Brainwash dataset.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_mean_overlay.jpg' alt=' The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey'><div class='caption'> The pixel-averaged image of all Brainwash dataset images is shown with 81,973 head annotations drawn from the Brainwash training partition. (c) Adam Harvey</div></div></section><section> <h3>Who used Brainwash Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> @@ -46,18 +44,11 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -69,55 +60,62 @@ </div> --> <p> - To understand how Brainwash Dataset has been used around the world...
- affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how Brainwash Dataset has been used around the world for commercial, military and academic research; publicly available research citing Brainwash Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] Brainwash Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section><p>Add more analysis here</p> -</section><section> - - - <div class="hr-wave-holder"> - <div class="hr-wave-line hr-wave-line1"></div> - <div class="hr-wave-line hr-wave-line2"></div> - </div> - - <h2>Supplementary Information</h2> -</section><section class="applet_container"> +</section> + --><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. </p> <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section><h3>Additional Information</h3> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h3>Supplementary Information</h3> + +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/00425000_960.jpg' alt=' An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. 
License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> An sample image from the Brainwash dataset used for training face and head detection algorithms for surveillance. The datset contains about 12,000 images. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/brainwash_montage.jpg' alt=' 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)'><div class='caption'> 49 of the 11,918 images included in the Brainwash dataset. License: Open Data Commons Public Domain Dedication (PDDL)</div></div></section><section><h4>Additional Resources</h4> <ul> <li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li> </ul> +<p>TODO</p> +<ul> +<li>add bounding boxes to the header image</li> +<li>remake montage with randomized images, with bboxes</li> +<li>clean up intro text</li> +<li>verify quote citations</li> +</ul> </section><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p> </li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"><a href="#[^localized_region_context]_1">a</a></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p> </li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^replacement_algorithm]_1">a</a></span><p>Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p> diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html index 07522561..e958cbef 100644 --- a/site/public/datasets/celeba/index.html +++ b/site/public/datasets/celeba/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -45,30 +45,31 @@ </div> --> <p> - To understand how CelebA Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how CelebA Dataset has been used around the world for commercial, military and academic research; publicly available research citing Large-scale CelebFaces Attributes Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. 
Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] CelebA Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -76,13 +77,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html index 99d4a9ef..7ac30579 100644 --- a/site/public/datasets/cofw/index.html +++ b/site/public/datasets/cofw/index.html @@ -43,7 +43,7 @@ To increase the number of training images, and since COFW has the exact same la <p><a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a></p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -55,30 +55,31 @@ To increase the number of training images, and since COFW has the exact same la </div> --> <p> - To understand how COFW Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how COFW Dataset has been used around the world for commercial, military and academic research; publicly available research citing Caltech Occluded Faces in the Wild is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. 
Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] COFW Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -86,13 +87,13 @@ To increase the number of training images, and since COFW has the exact same la <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. @@ -103,8 +104,7 @@ To increase the number of training images, and since COFW has the exact same la <h3>Who used COFW Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
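The citation workflow described above — pull citing papers from a public index, geocode the institutions named in the papers, then verify dataset usage by hand — can be sketched in a few lines. The snippet below is an illustrative sketch only, not the project's actual pipeline: the Semantic Scholar Graph API endpoint is real, but the example paper ID and the institution coordinate table are placeholder assumptions.

```python
# Illustrative sketch only (not the project's pipeline): fetch papers that cite a
# dataset's source paper from the public Semantic Scholar Graph API, then attach
# manually curated coordinates to known institutions. The example paper ID and the
# institution table below are placeholder assumptions.
import requests

CITATIONS_URL = "https://api.semanticscholar.org/graph/v1/paper/{paper_id}/citations"
EXAMPLE_PAPER_ID = "arXiv:1506.04878"  # assumed identifier for a dataset's source paper

# Hypothetical hand-verified lookup of institution -> (latitude, longitude)
INSTITUTION_COORDS = {
    "Stanford University": (37.4275, -122.1697),
    "National University of Defense Technology": (28.2282, 112.9388),
}


def fetch_citing_papers(paper_id, limit=100):
    """Return citing papers with title, year, and authors."""
    params = {"fields": "title,year,authors", "limit": limit}
    resp = requests.get(CITATIONS_URL.format(paper_id=paper_id), params=params, timeout=30)
    resp.raise_for_status()
    return [row["citingPaper"] for row in resp.json().get("data", [])]


def geocode_institution(name):
    """Return verified coordinates for an institution, or None if not yet verified."""
    return INSTITUTION_COORDS.get(name)


if __name__ == "__main__":
    for paper in fetch_citing_papers(EXAMPLE_PAPER_ID):
        print(paper.get("year"), "-", paper.get("title"))
```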
</p> </section> diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html index 431cf7ff..9664181e 100644 --- a/site/public/datasets/duke_mtmc/index.html +++ b/site/public/datasets/duke_mtmc/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Duke MTMC is a dataset of CCTV footage of students at Duke University" /> + <meta name="description" content="Duke MTMC is a dataset of surveillance camera footage of students on Duke University campus" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,12 +26,17 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of CCTV footage of students at Duke University</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 cameras at Duke University campus in March 2014 -</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>March 19, 2014</div></div><div><div class='gray'>Cameras</div><div>8</div></div><div><div class='gray'>Video Frames</div><div>2,000,000</div></div><div><div class='gray'>Identities</div><div>Over 2,000</div></div><div><div class='gray'>Used for</div><div>Person re-identification, <br>face recognition</div></div><div><div class='gray'>Sector</div><div>Academic</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2> -<p>(PAGE UNDER DEVELOPMENT)</p> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,000 unique identities collected from 8 HD cameras at Duke University campus in March 2014 +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Created</div><div>2014</div></div><div><div class='gray'>Identities</div><div>Over 2,700</div></div><div><div class='gray'>Used for</div><div>Face recognition, person re-identification</div></div><div><div class='gray'>Created by</div><div>Computer Science Department, Duke University, Durham, US</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2> +<p>[ PAGE UNDER DEVELOPMENT ]</p> +<p>Duke MTMC is a dataset of video recorded on Duke University campus during for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em>. 
The videos were recorded during February and March 2014 and include a total of 888.8 minutes of video (independently verified).</p> +<p>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy."</p> +<p>The dataset includes approximately 2,000 annotated identities appearing in 85 hours of video from 8 cameras located throughout Duke University's campus.</p> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg' alt=' Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey'><div class='caption'> Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey</div></div></section><section><p>According to the dataset authors,</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -43,35 +48,35 @@ </div> --> <p> - To understand how Duke MTMC Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how Duke MTMC Dataset has been used around the world for commercial, military and academic research; publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] Duke MTMC Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used Duke MTMC Dataset?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
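The pixel-averaged overlay figures shown for Brainwash and Duke MTMC can be approximated with a short script: average every frame element-wise in floating point, then draw the annotation boxes on top. The sketch below assumes a folder of equally sized JPEG frames and a CSV of x,y,width,height boxes; both paths and the CSV layout are assumptions, not the datasets' actual formats.

```python
# Minimal sketch: compute a pixel-averaged image from a folder of equally sized
# frames and overlay bounding-box annotations. The input paths and the CSV column
# layout (x, y, width, height) are assumptions for illustration only.
import csv
import glob

import cv2
import numpy as np


def mean_image(frame_dir):
    """Element-wise average of all frames, accumulated as float to avoid clipping."""
    paths = sorted(glob.glob(f"{frame_dir}/*.jpg"))
    acc = None
    for path in paths:
        frame = cv2.imread(path).astype(np.float64)
        acc = frame if acc is None else acc + frame
    return (acc / len(paths)).astype(np.uint8)


def draw_boxes(image, csv_path, color=(255, 255, 255)):
    """Draw one rectangle per x,y,w,h row onto a copy of the averaged image."""
    out = image.copy()
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            x, y, w, h = (int(float(v)) for v in row)
            cv2.rectangle(out, (x, y), (x + w, y + h), color, 1)
    return out


if __name__ == "__main__":
    average = mean_image("frames")                    # assumed folder of dataset frames
    overlay = draw_boxes(average, "annotations.csv")  # assumed annotation export
    cv2.imwrite("mean_overlay.jpg", overlay)
```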
</p> </section> @@ -80,14 +85,7 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -97,13 +95,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html index 7e10c2fb..ed36abb5 100644 --- a/site/public/datasets/hrt_transgender/index.html +++ b/site/public/datasets/hrt_transgender/index.html @@ -32,8 +32,7 @@ <h3>Who used HRT Transgender?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> @@ -42,18 +41,11 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -65,30 +57,31 @@ </div> --> <p> - To understand how HRT Transgender has been used around the world... 
- affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how HRT Transgender has been used around the world for commercial, military and academic research; publicly available research citing HRT Transgender Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] HRT Transgender ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -96,13 +89,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html index 9cbf2e11..22384d77 100644 --- a/site/public/datasets/lfw/index.html +++ b/site/public/datasets/lfw/index.html @@ -46,7 +46,7 @@ <p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entire from online photos. However, <em>Names and Faces</em> and <em>LFW</em> are not the first face recognition dataset created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. 
Images obtained "in the wild" means using an image without explicit consent or awareness from the subject or photographer.</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -58,35 +58,35 @@ </div> --> <p> - To understand how LFW has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how LFW has been used around the world for commercial, military and academic research; publicly available research citing Labeled Faces in the Wild is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] LFW ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used LFW?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> @@ -95,14 +95,7 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -112,13 +105,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. 
These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html index b7e68c47..9a05d20e 100644 --- a/site/public/datasets/market_1501/index.html +++ b/site/public/datasets/market_1501/index.html @@ -31,7 +31,7 @@ <p>(PAGE UNDER DEVELOPMENT)</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -43,30 +43,31 @@ </div> --> <p> - To understand how Market 1501 has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how Market 1501 has been used around the world for commercial, military and academic research; publicly available research citing Market 1501 Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] Market 1501 ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -74,13 +75,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html index 50788aad..0ddf0c68 100644 --- a/site/public/datasets/msceleb/index.html +++ b/site/public/datasets/msceleb/index.html @@ -35,8 +35,7 @@ <h3>Who used MsCeleb?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> @@ -45,18 +44,11 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -68,30 +60,31 @@ </div> --> <p> - To understand how MsCeleb has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how MsCeleb has been used around the world for commercial, military and academic research; publicly available research citing Microsoft Celebrity Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] MsCeleb ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. 
</p> -</section><section><p>Add more analysis here</p> +</section> + --><section><p>Add more analysis here</p> </section><section> @@ -100,13 +93,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html index 09baca99..9e7eb164 100644 --- a/site/public/datasets/pipa/index.html +++ b/site/public/datasets/pipa/index.html @@ -31,7 +31,7 @@ <p>(PAGE UNDER DEVELOPMENT)</p> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -43,30 +43,31 @@ </div> --> <p> - To understand how PIPA Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how PIPA Dataset has been used around the world for commercial, military and academic research; publicly available research citing People in Photo Albums Dataset is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] PIPA Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. 
</p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -74,13 +75,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html index ca106022..2477c9f8 100644 --- a/site/public/datasets/uccs/index.html +++ b/site/public/datasets/uccs/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="Unconstrained College Students (UCCS) is a dataset of images ..." /> + <meta name="description" content="Unconstrained College Students (UCCS) is a dataset of long-range surveillance photos of students taken without their knowledge" /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -26,12 +26,12 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Unconstrained College Students (UCCS)</span> is a dataset of images ...</span></div><div class='hero_subdesc'><span class='bgpad'>The UCCS dataset includes ... 
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>TBD</div></div><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div></div></div><h2>Unconstrained College Students ...</h2> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Unconstrained College Students (UCCS)</span> is a dataset of long-range surveillance photos of students taken without their knowledge</span></div><div class='hero_subdesc'><span class='bgpad'>The UCCS dataset includes 16,149 images and 1,732 identities of students at the University of Colorado Colorado Springs campus and is used for face recognition and face detection +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2018</div></div><div><div class='gray'>Images</div><div>16,149</div></div><div><div class='gray'>Identities</div><div>1,732</div></div><div><div class='gray'>Used for</div><div>Face recognition, face detection</div></div><div><div class='gray'>Created by</div><div>University of Colorado Colorado Springs (US)</div></div><div><div class='gray'>Funded by</div><div>ODNI, IARPA, ONR MURI, Army SBIR, SOCOM SBIR</div></div><div><div class='gray'>Website</div><div><a href="https://vast.uccs.edu/Opensetface/">vast.uccs.edu</a></div></div></div></div><h2>Unconstrained College Students ...</h2> <p>(PAGE UNDER DEVELOPMENT)</p> -</section><section> +</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_mean_bboxes_comp.jpg' alt=' The pixel-average of all Unconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey'><div class='caption'> The pixel-average of all Unconstrained College Students images is shown with all 51,838 face annotations. (c) Adam Harvey</div></div></section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -43,35 +43,35 @@ </div> --> <p> - To understand how UCCS has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how UCCS has been used around the world for commercial, military and academic research, publicly available research citing the UnConstrained College Students Dataset is collected, verified, and geocoded to show the biometric trade routes of the people appearing in the images. Click on the markers to reveal research projects at that location. 
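Note on the map legend that follows: markers are grouped into three classes (Academic, Commercial, Military / Government, rendered with the edu/com/gov CSS classes), but these pages do not document how an institution is assigned to a class. The Python sketch below is therefore only an illustrative heuristic; the keyword lists and the legend_class helper are invented for this example.

```python
# Illustrative heuristic only: the project's actual classification criteria are
# not documented on this page; the keyword lists below are assumptions.
ACADEMIC_HINTS = ("university", "institute", "college", ".edu")
GOV_MIL_HINTS = ("ministry", "defence", "defense", "army", "navy",
                 "air force", ".gov", ".mil")

def legend_class(institution: str) -> str:
    """Map an institution name or domain to a map-legend CSS class."""
    name = institution.lower()
    if any(hint in name for hint in GOV_MIL_HINTS):
        return "gov"  # Military / Government
    if any(hint in name for hint in ACADEMIC_HINTS):
        return "edu"  # Academic
    return "com"      # Commercial (default bucket)

print(legend_class("University of Colorado Colorado Springs"))  # -> edu
print(legend_class("Naval Research Laboratory"))                # -> gov
```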
</p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] UCCS ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <h3>Who used UCCS?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. </p> </section> @@ -80,14 +80,7 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> @@ -97,20 +90,20 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. </p> <p> Add [button/link] to download CSV. Add search input field to filter. 
</p> <div class="applet" data-payload="{"command": "citations"}"></div> -</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_bboxes_grayscale.jpg' alt='Bounding box visualization'><div class='caption'>Bounding box visualization</div></div></section><section><h3>Research Notes</h3> +</section><section><h3>Research Notes</h3> <p>The original Sapkota and Boult dataset, from which UCCS is derived, received funding from<sup class="footnote-ref" id="fnref-funding_sb"><a href="#fn-funding_sb">1</a></sup>:</p> <ul> <li>ONR (Office of Naval Research) MURI (The Department of Defense Multidisciplinary University Research Initiative) grant N00014-08-1-0638</li> @@ -123,6 +116,11 @@ <li>ODNI (Office of Director of National Intelligence)</li> <li>IARPA (Intelligence Advance Research Projects Activity) R&D contract 2014-14071600012</li> </ul> +<p>" In most face detection/recognition datasets, the majority of images are “posed”, i.e. the subjects know they are being photographed, and/or the images are selected for publication in public media. Hence, blurry, occluded and badly illuminated images are generally uncommon in these datasets. In addition, most of these challenges are close-set, i.e. the list of subjects in the gallery is the same as the one used for testing.</p> +<p>This challenge explores more unconstrained data, by introducing the new UnConstrained College Students (UCCS) dataset, where subjects are photographed using a long-range high-resolution surveillance camera without their knowledge. Faces inside these images are of various poses, and varied levels of blurriness and occlusion. The challenge also creates an open set recognition problem, where unknown people will be seen during testing and must be rejected.</p> +<p>With this challenge, we hope to foster face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet.</p> +<p>UnConstrained College Students (UCCS) Dataset</p> +<p>The UCCS dataset was collected over several months using Canon 7D camera fitted with Sigma 800mm F5.6 EX APO DG HSM lens, taking images at one frame per second, during times when many students were walking on the sidewalk. "</p> <div class="footnotes"> <hr> <ol><li id="fn-funding_sb"><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013.<a href="#fnref-funding_sb" class="footnote">↩</a></p></li> diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html index f78d1c04..e94568a3 100644 --- a/site/public/datasets/viper/index.html +++ b/site/public/datasets/viper/index.html @@ -35,8 +35,7 @@ <h3>Who used VIPeR?</h3> <p> - This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns - to see yearly totals. These charts show at most the top 10 countries. + This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries. 
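In other words, the chart data is a per-country citation total plus yearly sub-totals, truncated to the ten most-cited countries. A minimal sketch of that aggregation, assuming each verified citation has already been reduced to a country and a publication year (the field names and sample rows are placeholders):

```python
from collections import Counter, defaultdict

# Placeholder rows; in practice each row would be one verified, geocoded citation.
citations = [
    {"country": "China", "year": 2017},
    {"country": "United States", "year": 2018},
    {"country": "China", "year": 2018},
]

totals = Counter(c["country"] for c in citations)   # overall ranking per country
yearly = defaultdict(Counter)                       # per-country yearly sub-totals
for c in citations:
    yearly[c["country"]][c["year"]] += 1

# The bar charts display at most the ten most frequent countries.
for country, total in totals.most_common(10):
    print(country, total, dict(yearly[country]))
```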
</p> </section> @@ -45,18 +44,11 @@ <!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span> </div> --> <div class="applet" data-payload="{"command": "chart"}"></div> -</section><section> - <p> - These pie charts show overall totals based on country and institution type. - </p> - - </section> - -<section class="applet_container"> +</section><section class="applet_container"> <div class="applet" data-payload="{"command": "piechart"}"></div> </section><section> - <h3>Information Supply Chain</h3> + <h3>Biometric Trade Routes</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -68,30 +60,31 @@ </div> --> <p> - To understand how VIPeR has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + To help understand how VIPeR has been used around the world for commercial, military and academic research, publicly available research citing Viewpoint Invariant Pedestrian Recognition is collected, verified, and geocoded to show the biometric trade routes of the people appearing in the images. Click on the markers to reveal research projects at that location. </p> </section> -<section class="applet_container"> +<section class="applet_container fullwidth"> <div class="applet" data-payload="{"command": "map"}"></div> + </section> <div class="caption"> <ul class="map-legend"> <li class="edu">Academic</li> - <li class="com">Industry</li> - <li class="gov">Government / Military</li> + <li class="com">Commercial</li> + <li class="gov">Military / Government</li> <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li> </ul> </div> -<section> +<!-- <section> <p class='subp'> [section under development] VIPeR ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> -</section><section> +</section> + --><section> <div class="hr-wave-holder"> @@ -99,13 +92,13 @@ <div class="hr-wave-line hr-wave-line2"></div> </div> - <h2>Supplementary Information</h2> + <h3>Supplementary Information</h3> + </section><section class="applet_container"> - <h3>Citations</h3> + <h3>Dataset Citations</h3> <p> - Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates - and indexes research papers. The citations were geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. + The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms. 
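A rough sketch of the two automatable steps described here, collecting citing papers and geocoding institution names, assuming the public Semantic Scholar Graph API (via requests) and an OpenStreetMap/Nominatim geocoder (via geopy). The paper ID and institution string are placeholders, and the manual verification step is intentionally left out.

```python
import requests
from geopy.geocoders import Nominatim

# Placeholder paper ID; the Graph API also accepts DOI or "ARXIV:..." identifiers.
PAPER_ID = "ARXIV:xxxx.xxxxx"
URL = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}/citations"

resp = requests.get(URL, params={"fields": "title,year", "limit": 100})
resp.raise_for_status()
citing_papers = [item["citingPaper"] for item in resp.json().get("data", [])]

# Manual verification of actual dataset usage happens between collection and
# geocoding and is omitted here.
geocoder = Nominatim(user_agent="dataset-citation-map-sketch")
for paper in citing_papers:
    # In practice the institution is read from the paper's PDF front matter;
    # a hard-coded placeholder is used for illustration.
    institution = "University of Colorado Colorado Springs"
    location = geocoder.geocode(institution)
    if location is not None:
        print(paper.get("year"), paper.get("title"),
              location.latitude, location.longitude)
```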
</p> <p> Add [button/link] to download CSV. Add search input field to filter.
