Diffstat (limited to 'site/public/datasets')
| Mode | File | Lines changed |
| -rw-r--r-- | site/public/datasets/50_people_one_question/index.html | 6 |
| -rw-r--r-- | site/public/datasets/brainwash/index.html | 8 |
| -rw-r--r-- | site/public/datasets/celeba/index.html | 6 |
| -rw-r--r-- | site/public/datasets/cofw/index.html | 6 |
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 172 |
| -rw-r--r-- | site/public/datasets/hrt_transgender/index.html | 119 |
| -rw-r--r-- | site/public/datasets/index.html | 50 |
| -rw-r--r-- | site/public/datasets/lfw/index.html | 8 |
| -rw-r--r-- | site/public/datasets/market_1501/index.html | 123 |
| -rw-r--r-- | site/public/datasets/pipa/index.html | 109 |
| -rw-r--r-- | site/public/datasets/uccs/index.html | 98 |
| -rw-r--r-- | site/public/datasets/viper/index.html | 8 |
12 files changed, 655 insertions, 58 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html index 3a854d50..73f9be97 100644 --- a/site/public/datasets/50_people_one_question/index.html +++ b/site/public/datasets/50_people_one_question/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -46,7 +46,7 @@ --> <p> To understand how 50 People One Question Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -64,7 +64,7 @@ <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
</p> </section><section> diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html index 55c1b977..64dcdda7 100644 --- a/site/public/datasets/brainwash/index.html +++ b/site/public/datasets/brainwash/index.html @@ -26,7 +26,7 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style='color: #ffaa00'>Brainwash</span> is a dataset of webcam images taken from the Brainwash Cafe in San Francisco</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection algorithms + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection algorithms </span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2015</div></div><div><div class='gray'>Images</div><div>11,918</div></div><div><div class='gray'>Faces</div><div>91,146</div></div><div><div class='gray'>Created by</div><div>Stanford Department of Computer Science</div></div><div><div class='gray'>Funded by</div><div>Max Planck Center for Visual Computing and Communication</div></div><div><div class='gray'>Location</div><div>Brainwash Cafe, San Francisco</div></div><div><div class='gray'>Purpose</div><div>Training face detection</div></div><div><div class='gray'>Website</div><div><a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">stanford.edu</a></div></div><div><div class='gray'>Paper</div><div><a href="http://arxiv.org/abs/1506.04878">End-to-End People Detection in Crowded Scenes</a></div></div><div><div class='gray'>Explicit Consent</div><div>No</div></div></div></div><h2>Brainwash Dataset</h2> <p>(PAGE UNDER DEVELOPMENT)</p> <p><em>Brainwash</em> is a face detection dataset created from the Brainwash Cafe's livecam footage, including 11,918 images of "everyday life of a busy downtown cafe<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a>". The images are used to develop face detection algorithms for the "challenging task of detecting people in crowded scenes" and tracking them.</p> @@ -46,7 +46,7 @@ <div class="applet" data-payload="{"command": "chart"}"></div> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -59,7 +59,7 @@ --> <p> To understand how Brainwash Dataset has been used around the world...
- affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -77,7 +77,7 @@ <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> </section><section><p>Add more analysis here</p> </section><section> diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html index 024f842f..50e460b7 100644 --- a/site/public/datasets/celeba/index.html +++ b/site/public/datasets/celeba/index.html @@ -33,7 +33,7 @@ <p>Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non-recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat</p> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -46,7 +46,7 @@ --> <p> To understand how CelebA Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -64,7 +64,7 @@ <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
</p> </section><section> diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html index 605a325a..31c577a2 100644 --- a/site/public/datasets/cofw/index.html +++ b/site/public/datasets/cofw/index.html @@ -43,7 +43,7 @@ To increase the number of training images, and since COFW has the exact same la <p><a href="https://www.cs.cmu.edu/~peiyunh/topdown/">https://www.cs.cmu.edu/~peiyunh/topdown/</a></p> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -56,7 +56,7 @@ --> <p> To understand how COFW Dataset has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -74,7 +74,7 @@ To increase the number of training images, and since COFW has the exact same la <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
</p> </section><section> diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html new file mode 100644 index 00000000..40eb8c7e --- /dev/null +++ b/site/public/datasets/duke_mtmc/index.html @@ -0,0 +1,172 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Duke MTMC is a dataset of CCTV footage of students at Duke University" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/tabulator.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of CCTV footage of students at Duke University</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and over 2,000 unique identities collected from 8 cameras on the Duke University campus in March 2014 +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>March 19, 2014</div></div><div><div class='gray'>Cameras</div><div>8</div></div><div><div class='gray'>Video Frames</div><div>2,000,000</div></div><div><div class='gray'>Identities</div><div>Over 2,000</div></div><div><div class='gray'>Used for</div><div>Person re-identification, <br>face recognition</div></div><div><div class='gray'>Sector</div><div>Academic</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2> +<p>(PAGE UNDER DEVELOPMENT)</p> +</section><section> + + <h3>Information Supply Chain</h3> +<!-- + <div class="map-sidebar right-sidebar"> + <h3>Legend</h3> + <ul> + <li><span style="color: #f2f293">■</span> Industry</li> + <li><span style="color: #f30000">■</span> Academic</li> + <li><span style="color: #3264f6">■</span> Government</li> + </ul> + </div> + --> + <p> + To understand how Duke MTMC Dataset has been used around the world... + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <div class="map-legend-item edu">Academic</div> + <div class="map-legend-item com">Industry</div> + <div class="map-legend-item gov">Government</div> + Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and not yet manually verified.
+</div> + +<section> + <p class='subp'> + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + </p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> +</section><section class="applet_container"> + + <h3>Citations</h3> + <p> + Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates + and indexes research papers. Metadata was extracted from these papers, including names of institutions extracted automatically from PDFs, and then the addresses were geocoded. Data is not yet manually verified, and reflects any time the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology. + </p> + <p> + Add button/link to download CSV + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section><h2>Research Notes</h2> +<ul> +<li>"DukeMTMC aims to accelerate advances in multi-target multi-camera tracking. It provides a tracking system that works within and across cameras, a new large scale HD video data set recorded by 8 synchronized cameras with more than 7,000 single camera trajectories and over 2,000 unique identities, and a new performance evaluation method that measures how often a system is correct about who is where"</li> +<li><p>DukeMTMC is a new, manually annotated, calibrated, multi-camera data set recorded outdoors on the Duke University campus with 8 synchronized cameras. It consists of:</p> +<p>8 static cameras x 85 minutes of 1080p 60 fps video + More than 2,000,000 manually annotated frames + More than 2,000 identities + Manual annotation by 5 people over 1 year + More identities than all existing MTMC datasets combined + Unconstrained paths, diverse appearance</p> +</li> +</ul> +<p>People involved: +Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara, Carlo Tomasi.</p> +<p>Welcome to the Duke Multi-Target, Multi-Camera Tracking Project.</p> +<p>News</p> +<p>05 Feb 2019: We are organizing the 2nd Workshop on MTMCT and ReID at CVPR 2019 + 25 Jul 2018: The code for DeepCC is available on github + 28 Feb 2018: OpenPose detections now available for download + 19 Feb 2018: Our DeepCC tracker has been accepted to CVPR 2018 + 04 Oct 2017: A new blog post describes ID measures of performance + 26 Jul 2017: Slides from the BMTT 2017 workshop are now available + 09 Dec 2016: DukeMTMC is now hosted on MOTChallenge</p> +<p>DukeMTMC Downloads</p> +<p>DukeMTMC dataset (tracking)</p> +<p>Dataset Extensions</p> +<p>Below is a list of dataset extensions provided by the community:</p> +<p>DukeMTMC-VideoReID (download) + DukeMTMC-reID (download) + DukeMTMC4REID + DukeMTMC-attribute</p> +<p>If you use or extend DukeMTMC, please refer to the license terms. +DukeMTMCT Benchmark</p> +<p>DukeMTMCT is a tracking benchmark hosted on motchallenge.net. Click here for the up-to-date rankings. Here you will find the official motchallenge-devkit used for evaluation by MOTChallenge. For detailed instructions on how to submit on motchallenge you can refer to this link.</p> +<p>Trackers are ranked using our identity-based measures which compute how often the system is correct about who is where, regardless of how often a target is lost and reacquired. Our measures are useful in applications such as security, surveillance or sports. This short post describes our measures with illustrations, while for details you can refer to the original paper. +Tracking Systems</p> +<p>We provide code for the following tracking systems which are all based on Correlation Clustering optimization:</p> +<p>DeepCC for single- and multi-camera tracking [1] + Single-Camera Tracker (demo video) [2] + Multi-Camera Tracker (demo video, failure cases) [2] + People-Groups Tracker [3] + Original Single-Camera Tracker [4]</p> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/about/disclaimer/">Disclaimer</a> + <a href="/about/terms/">Terms of Use</a> + <a href="/about/privacy/">Privacy</a> + <a href="/about/">About</a> + <a href="/about/team/">Team</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
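The "identity-based measures" the notes above refer to ("how often the system is correct about who is where") are the ID precision, ID recall, and IDF1 scores introduced in the Ristani et al. performance-measures paper that the project page cites. As a rough reference sketch (notation follows that paper: IDTP, IDFP, and IDFN count identity-level true positives, false positives, and false negatives after an optimal one-to-one matching between true and computed identities):

$$ \mathrm{IDP} = \frac{\mathrm{IDTP}}{\mathrm{IDTP} + \mathrm{IDFP}}, \qquad \mathrm{IDR} = \frac{\mathrm{IDTP}}{\mathrm{IDTP} + \mathrm{IDFN}}, \qquad \mathrm{IDF}_1 = \frac{2\,\mathrm{IDTP}}{2\,\mathrm{IDTP} + \mathrm{IDFP} + \mathrm{IDFN}} $$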
\ No newline at end of file diff --git a/site/public/datasets/hrt_transgender/index.html new file mode 100644 index 00000000..5211ac7e --- /dev/null +++ b/site/public/datasets/hrt_transgender/index.html @@ -0,0 +1,119 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="TBD" /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/tabulator.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>TBD</span></div><div class='hero_subdesc'><span class='bgpad'>TBD +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div></div></div><h2>HRT Transgender Dataset</h2> +</section><section> + <h3>Who used HRT Transgender?</h3> + + <p> + This bar chart presents a ranking of the top countries where citations originated. Mouse over individual columns + to see yearly totals. Colors are only assigned to the top 10 overall countries. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "chart"}"></div> +</section><section> + + <h3>Information Supply Chain</h3> +<!-- + <div class="map-sidebar right-sidebar"> + <h3>Legend</h3> + <ul> + <li><span style="color: #f2f293">■</span> Industry</li> + <li><span style="color: #f30000">■</span> Academic</li> + <li><span style="color: #3264f6">■</span> Government</li> + </ul> + </div> + --> + <p> + To understand how HRT Transgender has been used around the world... + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <div class="map-legend-item edu">Academic</div> + <div class="map-legend-item com">Industry</div> + <div class="map-legend-item gov">Government</div> + Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and not yet manually verified. +</div> + +<section> + <p class='subp'> + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
+ </p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> +</section><section class="applet_container"> + + <h3>Citations</h3> + <p> + Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates + and indexes research papers. Metadata was extracted from these papers, including names of institutions extracted automatically from PDFs, and then the addresses were geocoded. Data is not yet manually verified, and reflects any time the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology. + </p> + <p> + Add button/link to download CSV + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/about/disclaimer/">Disclaimer</a> + <a href="/about/terms/">Terms of Use</a> + <a href="/about/privacy/">Privacy</a> + <a href="/about/">About</a> + <a href="/about/team/">Team</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
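The citation-mapping workflow that the Citations sections above describe (collect citing papers from Semantic Scholar, convert each PDF to text, extract institution names, geocode them) can be sketched as below. This is a minimal illustration, not the site's actual code: every helper here (fetch_citing_papers, pdf_to_text, extract_org_names, geocode) is a hypothetical stand-in, as is the CSV layout.

```python
import csv

def fetch_citing_papers(paper_id):
    """Hypothetical stand-in: return metadata (title, year, pdf_url)
    for papers that cite `paper_id`, e.g. sourced from Semantic Scholar."""
    raise NotImplementedError

def pdf_to_text(pdf_url):
    """Hypothetical stand-in: download a PDF and extract its plain text."""
    raise NotImplementedError

def extract_org_names(text):
    """Hypothetical stand-in: pull institution names out of the paper text,
    typically from the affiliation block near the title."""
    raise NotImplementedError

def geocode(org_name):
    """Hypothetical stand-in: resolve an organization name to (lat, lon)."""
    raise NotImplementedError

def build_citation_map(paper_id, out_path="citations.csv"):
    """Write one CSV row per (citing paper, organization) pair for the map."""
    rows = []
    for paper in fetch_citing_papers(paper_id):
        for org in extract_org_names(pdf_to_text(paper["pdf_url"])):
            lat, lon = geocode(org)
            rows.append({"title": paper["title"], "year": paper["year"],
                         "org": org, "lat": lat, "lon": lon})
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "year", "org", "lat", "lon"])
        writer.writeheader()
        writer.writerows(rows)
```

Because organization names are extracted and geocoded automatically, the output of such a pipeline is noisy, which is why each page cautions that a citation may only mention the dataset in passing.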
\ No newline at end of file diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html index 2a412322..f618e86b 100644 --- a/site/public/datasets/index.html +++ b/site/public/datasets/index.html @@ -28,7 +28,7 @@ <section><h1>Facial Recognition Datasets</h1> -<h3>Case Studies</h3> +<h3>Survey</h3> </section> <section class='applet_container autosize'><div class='applet' data-payload='{"command":"dataset_list"}'></div></section> @@ -49,6 +49,18 @@ </div> </a> + <a href="/datasets/duke_mtmc/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>Duke Multi-Target, Multi-Camera Tracking</span> + <div class='fields'> + <div class='year visible'><span>2016</span></div> + <div class='purpose'><span>Person re-identification and multi-camera tracking</span></div> + <div class='images'><span>2,000,000 images</span></div> + <div class='identities'><span>1,812 </span></div> + </div> + </div> + </a> + <a href="/datasets/lfw/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/index.jpg)"> <div class="dataset"> <span class='title'>Labeled Faces in The Wild</span> @@ -61,14 +73,38 @@ </div> </a> - <a href="/datasets/mars/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/mars/assets/index.jpg)"> + <a href="/datasets/market_1501/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/index.jpg)"> <div class="dataset"> - <span class='title'>MARS</span> + <span class='title'>Market-1501</span> <div class='fields'> - <div class='year visible'><span>2016</span></div> - <div class='purpose'><span>Motion analysis and person re-identification</span></div> - <div class='images'><span>1,191,003 images</span></div> - <div class='identities'><span>1,261 </span></div> + <div class='year visible'><span>2015</span></div> + <div class='purpose'><span>Person re-identification</span></div> + <div class='images'><span>32,668 images</span></div> + <div class='identities'><span>1,501 </span></div> + </div> + </div> + </a> + + <a href="/datasets/pipa/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>People in Photo Albums</span> + <div class='fields'> + <div class='year visible'><span>2015</span></div> + <div class='purpose'><span>Face recognition</span></div> + <div class='images'><span>37,107 images</span></div> + <div class='identities'><span>2,356 </span></div> + </div> + </div> + </a> + + <a href="/datasets/uccs/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/index.jpg)"> + <div class="dataset"> + <span class='title'>Unconstrained College Students</span> + <div class='fields'> + <div class='year visible'><span>2018</span></div> + <div class='purpose'><span>Unconstrained face recognition</span></div> + <div class='images'><span>16,149 images</span></div> + <div class='identities'><span>4,362 </span></div> </div> </div> </a> diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html index 477673e2..7e3a1bd5 100644 --- a/site/public/datasets/lfw/index.html +++ b/site/public/datasets/lfw/index.html @@ -26,7 +26,7 @@ </header> <div class="content content-"> - <section class='intro_section' style='background-image: 
url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style='color: #ff0000'>Labeled Faces in The Wild</span> (LFW) is the first facial recognition dataset created entirely from online photos</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 13,456 images of 4,432 people's images copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms. + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Labeled Faces in The Wild (LFW)</span> is one of the first facial recognition datasets created entirely from online photos</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 13,233 images of 5,749 people copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms. </span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Created</div><div>2002 – 2004</div></div><div><div class='gray'>Images</div><div>13,233</div></div><div><div class='gray'>Identities</div><div>5,749</div></div><div><div class='gray'>Origin</div><div>Yahoo! News Images</div></div><div><div class='gray'>Used by</div><div>Facebook, Google, Microsoft, Baidu, Tencent, SenseTime, Face++, CIA, NSA, IARPA</div></div><div><div class='gray'>Website</div><div><a href="http://vis-www.cs.umass.edu/lfw">umass.edu</a></div></div></div><ul> <li>There are about 3 men for every 1 woman in the LFW dataset<a class="footnote_shim" name="[^lfw_www]_1"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a></li> <li>The person with the most images is <a href="http://vis-www.cs.umass.edu/lfw/person/George_W_Bush_comp.html">George W. Bush</a> with 530</li> @@ -46,7 +46,7 @@ <p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entirely from online photos. However, <em>Names and Faces</em> and <em>LFW</em> are not the first face recognition datasets created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. Obtaining images "in the wild" means using an image without explicit consent or awareness from the subject or photographer.</p> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -59,7 +59,7 @@ --> <p> To understand how LFW has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -77,7 +77,7 @@ <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed.
This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> </section><section> <h3>Who used LFW?</h3> diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html new file mode 100644 index 00000000..2d357f47 --- /dev/null +++ b/site/public/datasets/market_1501/index.html @@ -0,0 +1,123 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="Market-1501 is a collection of CCTV footage from ..." /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/tabulator.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Market-1501</span> is a collection of CCTV footage from ...</span></div><div class='hero_subdesc'><span class='bgpad'>The Market-1501 dataset includes ... +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>TBD</div></div><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div></div></div><h2>Market-1501 ...</h2> +<p>(PAGE UNDER DEVELOPMENT)</p> +</section><section> + + <h3>Information Supply Chain</h3> +<!-- + <div class="map-sidebar right-sidebar"> + <h3>Legend</h3> + <ul> + <li><span style="color: #f2f293">■</span> Industry</li> + <li><span style="color: #f30000">■</span> Academic</li> + <li><span style="color: #3264f6">■</span> Government</li> + </ul> + </div> + --> + <p> + To understand how Market 1501 has been used around the world... + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <div class="map-legend-item edu">Academic</div> + <div class="map-legend-item com">Industry</div> + <div class="map-legend-item gov">Government</div> + Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and not yet manually verified. +</div> + +<section> + <p class='subp'> + Standardized paragraph of text about the map.
Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + </p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> +</section><section class="applet_container"> + + <h3>Citations</h3> + <p> + Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates + and indexes research papers. Metadata was extracted from these papers, including names of institutions extracted automatically from PDFs, and then the addresses were geocoded. Data is not yet manually verified, and reflects any time the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology. + </p> + <p> + Add button/link to download CSV + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section><h2>Research Notes</h2> +<ul> +<li>"MARS is an extension of the Market-1501 dataset. During collection, we placed six near synchronized cameras in the campus of Tsinghua university. There were Five 1,080×1920 HD cameras and one 640×480 SD camera. MARS consists of 1,261 different pedestrians whom are captured by at least 2 cameras. Given a query tracklet, MARS aims to retrieve tracklets that contain the same ID." - main paper</li> +<li>bbox "0065C1T0002F0016.jpg", "0065" is the ID of the pedestrian. "C1" denotes the first +camera (there are totally 6 cameras). "T0002" means the 2th tracklet. "F016" is the 16th frame +within this tracklet. For the tracklets, their names are accumulated for each ID; but for frames, +they start from "F001" in each tracklet.</li> +</ul> +<p>@inproceedings{zheng2016mars, +title={MARS: A Video Benchmark for Large-Scale Person Re-identification}, +author={Zheng, Liang and Bie, Zhi and Sun, Yifan and Wang, Jingdong and Su, Chi and Wang, Shengjin and Tian, Qi}, +booktitle={European Conference on Computer Vision}, +year={2016}, +organization={Springer} +}</p> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/about/disclaimer/">Disclaimer</a> + <a href="/about/terms/">Terms of Use</a> + <a href="/about/privacy/">Privacy</a> + <a href="/about/">About</a> + <a href="/about/team/">Team</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
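The bbox naming convention quoted in the research notes above is regular enough to parse mechanically. A small sketch follows; the field names pid/cam/tracklet/frame are my own labels for the parts the note describes, and the frame field is allowed 3 or 4 digits because the note writes both "F0016" and "F016".

```python
import re

# "0065C1T0002F0016.jpg": pedestrian ID, camera, tracklet, frame within tracklet.
MARS_NAME = re.compile(
    r"^(?P<pid>\d{4})C(?P<cam>\d)T(?P<tracklet>\d{4})F(?P<frame>\d{3,4})\.jpg$"
)

def parse_mars_name(filename):
    """Split a MARS bbox filename into its numeric fields."""
    m = MARS_NAME.match(filename)
    if m is None:
        raise ValueError(f"unrecognized MARS filename: {filename!r}")
    return {k: int(v) for k, v in m.groupdict().items()}

# Example from the research notes: pedestrian 65, camera 1, tracklet 2, frame 16.
assert parse_mars_name("0065C1T0002F0016.jpg") == {
    "pid": 65, "cam": 1, "tracklet": 2, "frame": 16
}
```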
\ No newline at end of file diff --git a/site/public/datasets/pipa/index.html new file mode 100644 index 00000000..dca75724 --- /dev/null +++ b/site/public/datasets/pipa/index.html @@ -0,0 +1,109 @@ +<!doctype html> +<html> +<head> + <title>MegaPixels</title> + <meta charset="utf-8" /> + <meta name="author" content="Adam Harvey" /> + <meta name="description" content="PIPA is a dataset..." /> + <meta name="referrer" content="no-referrer" /> + <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> + <link rel='stylesheet' href='/assets/css/fonts.css' /> + <link rel='stylesheet' href='/assets/css/tabulator.css' /> + <link rel='stylesheet' href='/assets/css/css.css' /> + <link rel='stylesheet' href='/assets/css/leaflet.css' /> + <link rel='stylesheet' href='/assets/css/applets.css' /> +</head> +<body> + <header> + <a class='slogan' href="/"> + <div class='logo'></div> + <div class='site_name'>MegaPixels</div> + </a> + <div class='links'> + <a href="/datasets/">Datasets</a> + <a href="/about/">About</a> + </div> + </header> + <div class="content content-dataset"> + + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">PIPA</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>PIPA subdescription +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>TBD</div></div><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div></div></div><h2>Dataset Title TBD</h2> +<p>(PAGE UNDER DEVELOPMENT)</p> +</section><section> + + <h3>Information Supply Chain</h3> +<!-- + <div class="map-sidebar right-sidebar"> + <h3>Legend</h3> + <ul> + <li><span style="color: #f2f293">■</span> Industry</li> + <li><span style="color: #f30000">■</span> Academic</li> + <li><span style="color: #3264f6">■</span> Government</li> + </ul> + </div> + --> + <p> + To understand how PIPA Dataset has been used around the world... + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <div class="map-legend-item edu">Academic</div> + <div class="map-legend-item com">Industry</div> + <div class="map-legend-item gov">Government</div> + Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and not yet manually verified. +</div> + +<section> + <p class='subp'> + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
+ </p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> +</section><section class="applet_container"> + + <h3>Citations</h3> + <p> + Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates + and indexes research papers. Metadata was extracted from these papers, including names of institutions extracted automatically from PDFs, and then the addresses were geocoded. Data is not yet manually verified, and reflects any time the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology. + </p> + <p> + Add button/link to download CSV + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section><h2>Research Notes</h2> +</section> + + </div> + <footer> + <div> + <a href="/">MegaPixels.cc</a> + <a href="/about/disclaimer/">Disclaimer</a> + <a href="/about/terms/">Terms of Use</a> + <a href="/about/privacy/">Privacy</a> + <a href="/about/">About</a> + <a href="/about/team/">Team</a> + </div> + <div> + MegaPixels ©2017-19 Adam R. Harvey / + <a href="https://ahprojects.com">ahprojects.com</a> + </div> + </footer> +</body> + +<script src="/assets/js/dist/index.js"></script> +</html>
\ No newline at end of file diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html index 0283bf3b..336d5f01 100644 --- a/site/public/datasets/uccs/index.html +++ b/site/public/datasets/uccs/index.html @@ -4,7 +4,7 @@ <title>MegaPixels</title> <meta charset="utf-8" /> <meta name="author" content="Adam Harvey" /> - <meta name="description" content="UCCS: Unconstrained College Students" /> + <meta name="description" content="Unconstrained College Students (UCCS) is a dataset of images ..." /> <meta name="referrer" content="no-referrer" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" /> <link rel='stylesheet' href='/assets/css/fonts.css' /> @@ -24,47 +24,85 @@ <a href="/about/">About</a> </div> </header> - <div class="content content-"> + <div class="content content-dataset"> - <section><h1>Unconstrained College Students</h1> -</section><section><div class='meta'><div><div class='gray'>Years</div><div>2012-2013</div></div><div><div class='gray'>Images</div><div>16,149</div></div><div><div class='gray'>Identities</div><div>4,362</div></div><div><div class='gray'>Origin</div><div>Colorado Springs Campus</div></div></div><section><section class='fullwidth'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/images/uccs_index.gif' alt='Pixellated and redacted example image from the UCCS dataset. ©Adam Harvey'><div class='caption'>Pixellated and redacted example image from the UCCS dataset. ©Adam Harvey</div></div></section><section><p><strong>Unconstrained College Students</strong> is a large-scale, unconstrained face detection and recognition dataset. It includes</p> -<p>The UCCS includes...</p> -<h3>Funding Sources</h3> -<p>The original Sapkota and Boult dataset, from which UCCS is derived, received funding from[^funding_sb]:</p> + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Unconstrained College Students (UCCS)</span> is a dataset of images ...</span></div><div class='hero_subdesc'><span class='bgpad'>The UCCS dataset includes ... +</span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Collected</div><div>TBD</div></div><div><div class='gray'>Published</div><div>TBD</div></div><div><div class='gray'>Images</div><div>TBD</div></div><div><div class='gray'>Faces</div><div>TBD</div></div></div></div><h2>Unconstrained College Students ...</h2> +<p>(PAGE UNDER DEVELOPMENT)</p> +</section><section> + + <h3>Information Supply Chain</h3> +<!-- + <div class="map-sidebar right-sidebar"> + <h3>Legend</h3> + <ul> + <li><span style="color: #f2f293">■</span> Industry</li> + <li><span style="color: #f30000">■</span> Academic</li> + <li><span style="color: #3264f6">■</span> Government</li> + </ul> + </div> + --> + <p> + To understand how UCCS has been used around the world... 
+ and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. + </p> + + </section> + +<section class="applet_container"> + <div class="applet" data-payload="{"command": "map"}"></div> +</section> + +<div class="caption"> + <div class="map-legend-item edu">Academic</div> + <div class="map-legend-item com">Industry</div> + <div class="map-legend-item gov">Government</div> + Data is compiled from <a href="https://www.semanticscholar.org">Semantic Scholar</a> and not yet manually verified. +</div> + +<section> + <p class='subp'> + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. + </p> +</section><section> + + + <div class="hr-wave-holder"> + <div class="hr-wave-line hr-wave-line1"></div> + <div class="hr-wave-line hr-wave-line2"></div> + </div> + + <h2>Supplementary Information</h2> +</section><section class="applet_container"> + + <h3>Citations</h3> + <p> + Citations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates + and indexes research papers. Metadata was extracted from these papers, including names of institutions extracted automatically from PDFs, and then the addresses were geocoded. Data is not yet manually verified, and reflects any time the paper was cited. Some papers may only mention the dataset in passing, while others use it as part of their research methodology. + </p> + <p> + Add button/link to download CSV + </p> + + <div class="applet" data-payload="{"command": "citations"}"></div> +</section><section><h3>Research Notes</h3> +<p>The original Sapkota and Boult dataset, from which UCCS is derived, received funding from<sup class="footnote-ref" id="fnref-funding_sb"><a href="#fn-funding_sb">1</a></sup>:</p> <ul> <li>ONR (Office of Naval Research) MURI (The Department of Defense Multidisciplinary University Research Initiative) grant N00014-08-1-0638</li> <li>Army SBIR (Small Business Innovation Research) grant W15P7T-12-C-A210</li> <li>SOCOM (Special Operations Command) SBIR (Small Business Innovation Research) grant H92222-07-P-0020</li> </ul> -<p>The more recent UCCS version of the dataset received funding from [^funding_uccs]:</p> +<p>The more recent UCCS version of the dataset received funding from <sup class="footnote-ref" id="fnref-funding_uccs"><a href="#fn-funding_uccs">2</a></sup>:</p> <ul> <li>National Science Foundation Grant IIS-1320956</li> <li>ODNI (Office of the Director of National Intelligence)</li> <li>IARPA (Intelligence Advanced Research Projects Activity) R&D contract 2014-14071600012</li> </ul> -<h3>Citations</h3> -<p>[add map here]</p> -<p>[add citations table here]</p> -</section><section class='fullwidth'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/images/uccs_synthetic.jpg' alt='Pixellated and redacted example image from the UCCS dataset. ©Adam Harvey'><div class='caption'>Pixellated and redacted example image from the UCCS dataset. ©Adam Harvey</div></div></section><section><h3>Notes</h3> -<ul> -<li>Images from UCCS are not available for public display. Instead a pixellated, redacted, and colored interpretation has been displayed here.
The full images are available here.</li> -<li>Images can be downloaded from...</li> -</ul> -<h3>Resources</h3> -<ul> -<li>Download video</li> -<li>links to UCCS</li> -<li>download synthetic images</li> -</ul> -<h3>Image Terms of Use</h3> -<ul> -<li>All images are ©Adam Harvey / megapixels.cc</li> -<li>You are welcomed to use these images for academic and journalistic use including for research papers, news stories, presentations. </li> -<li>Please use the following citation:</li> -</ul> -</section><section class='applet_container'><div class='applet' data-payload='{"command": "MegaPixels.cc Adam Harvey 2013-2019."}'></div></section><section><div class="footnotes"> <hr> -<ol></ol> +<ol><li id="fn-funding_sb"><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013.<a href="#fnref-funding_sb" class="footnote">↩</a></p></li> +<li id="fn-funding_uccs"><p>Günther, M. et al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. arXiv:1708.02337v3.<a href="#fnref-funding_uccs" class="footnote">↩</a></p></li> +</ol> </div> </section> diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html index cbd866f4..1de17f57 100644 --- a/site/public/datasets/viper/index.html +++ b/site/public/datasets/viper/index.html @@ -26,7 +26,7 @@ </header> <div class="content content-dataset"> - <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style='color: #ffaa00'>VIPeR</span> is a person re-identification dataset of images captured at UC Santa Cruz in 2007</span></div><div class='hero_subdesc'><span class='bgpad'>VIPeR contains 1,264 images and 632 persons on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance + <section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">VIPeR</span> is a person re-identification dataset of images captured at UC Santa Cruz in 2007</span></div><div class='hero_subdesc'><span class='bgpad'>VIPeR contains 1,264 images of 632 people captured on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance </span></div></div></section><section><div class='left-sidebar'><div class='meta'><div><div class='gray'>Published</div><div>2007</div></div><div><div class='gray'>Images</div><div>1,264</div></div><div><div class='gray'>Persons</div><div>632</div></div><div><div class='gray'>Created by</div><div>UC Santa Cruz</div></div></div></div><h2>VIPeR Dataset</h2> <p>(PAGE UNDER DEVELOPMENT)</p> <p><em>VIPeR (Viewpoint Invariant Pedestrian Recognition)</em> is a dataset of pedestrian images captured at the University of California Santa Cruz in 2007.
According to the researchers, 2 "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."</p> @@ -45,7 +45,7 @@ <div class="applet" data-payload="{"command": "chart"}"></div> </section><section> - <h3>Biometric Trade Routes (beta)</h3> + <h3>Information Supply Chain</h3> <!-- <div class="map-sidebar right-sidebar"> <h3>Legend</h3> @@ -58,7 +58,7 @@ --> <p> To understand how VIPeR has been used around the world... - affected global research on computer vision, surveillance, defense, and consumer technology, the and where this dataset has been used the locations of each organization that used or referenced the datast + and how it has affected global research on computer vision, surveillance, defense, and consumer technology, this map shows the locations of each organization that used or referenced the dataset. </p> </section> @@ -76,7 +76,7 @@ <section> <p class='subp'> - The data is generated by collecting all citations for all original research papers associated with the dataset. Then the PDFs are then converted to text and the organization names are extracted and geocoded. Because of the automated approach to extracting data, actual use of the dataset can not yet be confirmed. This visualization is provided to help locate and confirm usage and will be updated as data noise is reduced. + Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. </p> </section><section>