Diffstat (limited to 'site/public/datasets')
-rw-r--r--  site/public/datasets/50_people_one_question/index.html   6
-rw-r--r--  site/public/datasets/afad/index.html                     6
-rw-r--r--  site/public/datasets/brainwash/index.html                6
-rw-r--r--  site/public/datasets/caltech_10k/index.html              6
-rw-r--r--  site/public/datasets/celeba/index.html                   6
-rw-r--r--  site/public/datasets/cofw/index.html                     6
-rw-r--r--  site/public/datasets/duke_mtmc/index.html               15
-rw-r--r--  site/public/datasets/feret/index.html                   62
-rw-r--r--  site/public/datasets/hrt_transgender/index.html          6
-rw-r--r--  site/public/datasets/index.html                         24
-rw-r--r--  site/public/datasets/lfpw/index.html                     6
-rw-r--r--  site/public/datasets/lfw/index.html                      6
-rw-r--r--  site/public/datasets/market_1501/index.html              6
-rw-r--r--  site/public/datasets/msceleb/index.html                  6
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html       6
-rw-r--r--  site/public/datasets/pipa/index.html                     6
-rw-r--r--  site/public/datasets/pubfig/index.html                   6
-rw-r--r--  site/public/datasets/uccs/index.html                     4
-rw-r--r--  site/public/datasets/vgg_face2/index.html                6
-rw-r--r--  site/public/datasets/viper/index.html                    6
-rw-r--r--  site/public/datasets/youtube_celebrities/index.html      4
21 files changed, 142 insertions(+), 63 deletions(-)
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
index dfd8cbff..76d22562 100644
--- a/site/public/datasets/50_people_one_question/index.html
+++ b/site/public/datasets/50_people_one_question/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/50_people_one_question/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">50 People One Question</span> is a dataset of people from an online video series on YouTube and Vimeo used for building facial recognition algorithms</span></div><div class='hero_subdesc'><span class='bgpad'>People One Question dataset includes ...
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>50 People 1 Question</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2013</div>
</div><div class='meta'>
@@ -39,8 +40,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.vision.caltech.edu/~dhall/projects/MergingPoseEstimates/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div>
- </div></div><h2>50 People 1 Question</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
</section><section>
<h3>Who used 50 People One Question Dataset?</h3>
diff --git a/site/public/datasets/afad/index.html b/site/public/datasets/afad/index.html
index df14e7cd..832ce86a 100644
--- a/site/public/datasets/afad/index.html
+++ b/site/public/datasets/afad/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h2>Asian Face Age Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2017</div>
</div><div class='meta'>
@@ -41,8 +42,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://afad-dataset.github.io/' target='_blank' rel='nofollow noopener'>github.io</a></div>
- </div></div><h2>Asian Face Age Dataset</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
</section><section>
<h3>Who used Asian Face Age Dataset?</h3>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 03331a2d..494856ec 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/brainwash/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Brainwash is a dataset of webcam images taken from the Brainwash Cafe in San Francisco in 2014</span></div><div class='hero_subdesc'><span class='bgpad'>The Brainwash dataset includes 11,918 images of "everyday life of a busy downtown cafe" and is used for training head detection surveillance algorithms
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Brainwash Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -48,8 +49,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><h2>Brainwash Dataset</h2>
-<p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100 second intervals throught the entire day. Brainwash dataset was captured during 3 days in 2014: October 27, November 13, and November 24. According the author's reserach paper introducing the dataset, the images were acquired with the help of Angelcam.com.<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
+ </div></div><section><p><em>Brainwash</em> is a head detection dataset created from San Francisco's Brainwash Cafe livecam footage. It includes 11,918 images of "everyday life of a busy downtown cafe"<a class="footnote_shim" name="[^readme]_1"> </a><a href="#[^readme]" class="footnote" title="Footnote 1">1</a> captured at 100-second intervals throughout the day. The Brainwash dataset was captured on three days in 2014: October 27, November 13, and November 24. According to the authors' research paper introducing the dataset, the images were acquired with the help of Angelcam.com.<a class="footnote_shim" name="[^end_to_end]_1"> </a><a href="#[^end_to_end]" class="footnote" title="Footnote 2">2</a></p>
<p>Brainwash is not a widely used dataset, but since its publication by Stanford University in 2015 it has notably appeared in several research papers from the National University of Defense Technology in Changsha, China. In 2016 and 2017, researchers there conducted studies on detecting people's heads in crowded scenes for the purpose of surveillance. <a class="footnote_shim" name="[^localized_region_context]_1"> </a><a href="#[^localized_region_context]" class="footnote" title="Footnote 3">3</a> <a class="footnote_shim" name="[^replacement_algorithm]_1"> </a><a href="#[^replacement_algorithm]" class="footnote" title="Footnote 4">4</a></p>
<p>If you happened to be at Brainwash cafe in San Francisco at any time on October 27, November 13, or November 24 in 2014, you are most likely included in the Brainwash dataset and have unwittingly contributed to surveillance research.</p>
</section><section>
diff --git a/site/public/datasets/caltech_10k/index.html b/site/public/datasets/caltech_10k/index.html
index 00b5e7fd..c7b9f894 100644
--- a/site/public/datasets/caltech_10k/index.html
+++ b/site/public/datasets/caltech_10k/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h2>Caltech 10K Faces Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -47,8 +48,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><h2>Caltech 10K Faces Dataset</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
</section><section>
<h3>Who used Caltech 10K Faces Dataset?</h3>
diff --git a/site/public/datasets/celeba/index.html b/site/public/datasets/celeba/index.html
index c4caef20..e42ceb6f 100644
--- a/site/public/datasets/celeba/index.html
+++ b/site/public/datasets/celeba/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/celeba/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span style="color:#ffaa00">CelebA</span> is a dataset of people...</span></div><div class='hero_subdesc'><span class='bgpad'>CelebA includes...
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>CelebA Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -45,8 +46,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html' target='_blank' rel='nofollow noopener'>edu.hk</a></div>
- </div></div><h2>CelebA Dataset</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
</section><section>
<h3>Who used CelebA Dataset?</h3>
diff --git a/site/public/datasets/cofw/index.html b/site/public/datasets/cofw/index.html
index 4851e256..39e9680b 100644
--- a/site/public/datasets/cofw/index.html
+++ b/site/public/datasets/cofw/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h2>Caltech Occluded Faces in the Wild</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2013</div>
</div><div class='meta'>
@@ -38,8 +39,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.vision.caltech.edu/xpburgos/ICCV13/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div>
- </div></div><h2>Caltech Occluded Faces in the Wild</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
</section><section>
<h3>Who used COFW Dataset?</h3>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index ba32484a..78067101 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Duke MTMC</span> is a dataset of surveillance camera footage of students on Duke University campus</span></div><div class='hero_subdesc'><span class='bgpad'>Duke MTMC contains over 2 million video frames and 2,700 unique identities collected from 8 HD cameras at Duke University campus in March 2014
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Duke MTMC</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
</div><div class='meta'>
@@ -45,12 +46,12 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
- </div></div><h2>Duke MTMC</h2>
+ </div></div><section><p>[ page under development ]</p>
<p>Duke MTMC (Multi-Target, Multi-Camera Tracking) is a dataset of video recorded on Duke University campus for research and development of networked camera surveillance systems. MTMC tracking algorithms are used for citywide dragnet surveillance systems such as those used throughout China by SenseTime<a class="footnote_shim" name="[^sensetime_qz]_1"> </a><a href="#[^sensetime_qz]" class="footnote" title="Footnote 1">1</a> and the oppressive monitoring of 2.5 million Uyghurs in Xinjiang by SenseNets<a class="footnote_shim" name="[^sensenets_uyghurs]_1"> </a><a href="#[^sensenets_uyghurs]" class="footnote" title="Footnote 2">2</a>. In fact researchers from both SenseTime<a class="footnote_shim" name="[^sensetime1]_1"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_1"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a> and SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_1"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a> used the Duke MTMC dataset for their research.</p>
-<p>In this investigation into the Duke MTMC dataset, we found that researchers at Duke Univesity in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets that's used around the world by commercial and defense surveillance organizations.</p>
+<p>In this investigation into the Duke MTMC dataset, we found that researchers at Duke University in Durham, North Carolina captured over 2,000 students, faculty members, and passersby into one of the most prolific public surveillance research datasets that's used around the world by commercial and defense surveillance organizations.</p>
<p>Since its publication in 2016, the Duke MTMC dataset has been used in over 100 studies at organizations around the world including SenseTime<a class="footnote_shim" name="[^sensetime1]_2"> </a><a href="#[^sensetime1]" class="footnote" title="Footnote 4">4</a> <a class="footnote_shim" name="[^sensetime2]_2"> </a><a href="#[^sensetime2]" class="footnote" title="Footnote 5">5</a>, SenseNets<a class="footnote_shim" name="[^sensenets_sensetime]_2"> </a><a href="#[^sensenets_sensetime]" class="footnote" title="Footnote 3">3</a>, IARPA and IBM<a class="footnote_shim" name="[^iarpa_ibm]_1"> </a><a href="#[^iarpa_ibm]" class="footnote" title="Footnote 9">9</a>, Chinese National University of Defense <a class="footnote_shim" name="[^cn_defense1]_1"> </a><a href="#[^cn_defense1]" class="footnote" title="Footnote 7">7</a><a class="footnote_shim" name="[^cn_defense2]_1"> </a><a href="#[^cn_defense2]" class="footnote" title="Footnote 8">8</a>, US Department of Homeland Security<a class="footnote_shim" name="[^us_dhs]_1"> </a><a href="#[^us_dhs]" class="footnote" title="Footnote 10">10</a>, Tencent, Microsoft, Microsoft Research Asia, Fraunhofer, Senstar Corp., Alibaba, Naver Labs, Google and Hewlett-Packard Labs to name only a few.</p>
<p>The creation and publication of the Duke MTMC dataset (recorded in 2014, published in 2016) was originally funded by the U.S. Army Research Laboratory and the National Science Foundation<a class="footnote_shim" name="[^duke_mtmc_orig]_1"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. However, our analysis of the geographic locations of the publicly available research shows over twice as many citations by researchers from China as from the United States (44% China, 20% United States). In 2018 alone, there were 70 research project citations from China.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.</div></div></section><section><p>The 8 cameras deployed on Duke's campus were specifically setup to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a>. Camera 5 was positioned to capture students as entering and exiting the university's main chapel. Each camera's location and approximate field of view. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC datset.</p>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_reid_montage.jpg' alt=' A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.'><div class='caption'> A collection of 1,600 out of the 2,700 students and passersby captured into the Duke MTMC surveillance research and development dataset on . These students were also included in the Duke MTMC Re-ID dataset extension used for person re-identification. Open Data Commons Attribution License.</div></div></section><section><p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".<a class="footnote_shim" name="[^duke_mtmc_orig]_2"> </a><a href="#[^duke_mtmc_orig]" class="footnote" title="Footnote 6">6</a> Camera 5 was positioned to capture students entering and exiting the university's main chapel. Each camera's location and approximate field of view are shown in the map below. The heat map visualization shows the locations where pedestrians were most frequently annotated in each video from the Duke MTMC dataset.</p>
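+<p>One way to reproduce such a heat map is to accumulate the annotated bounding-box centers into a 2D histogram. The sketch below is illustrative only: it assumes annotations have already been exported as (x, y, width, height) pixel boxes, which is a hypothetical format rather than the dataset's actual ground-truth schema.</p>
+<pre>
+# Illustrative sketch: pedestrian-annotation heat map from bounding boxes.
+# Assumes boxes are (x, y, w, h) tuples in pixel coordinates (hypothetical
+# export format, not the Duke MTMC ground-truth schema).
+import numpy as np
+
+def annotation_heatmap(boxes, frame_w=1920, frame_h=1080, bins=64):
+    """Accumulate bounding-box centers into a normalized 2D histogram."""
+    cx = np.array([x + w / 2.0 for x, y, w, h in boxes])
+    cy = np.array([y + h / 2.0 for x, y, w, h in boxes])
+    heat, _, _ = np.histogram2d(cy, cx, bins=bins,
+                                range=[[0, frame_h], [0, frame_w]])
+    return heat / heat.max()  # scale to [0, 1] for display
+
+# e.g. heat = annotation_heatmap([(412, 230, 38, 96), (900, 410, 42, 101)])
+</pre>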
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_camera_map.jpg' alt=' Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.'><div class='caption'> Duke MTMC camera locations on Duke University campus. Open Data Commons Attribution License.</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cameras.jpg' alt=' Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC camera views for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_saliencies.jpg' alt=' Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc'><div class='caption'> Duke MTMC pedestrian detection saliency maps for 8 cameras deployed on campus &copy; megapixels.cc</div></div></section><section>
<h3>Who used Duke MTMC Dataset?</h3>
@@ -217,7 +218,11 @@ under Grants IIS-10-17017 and IIS-14-20894.</p>
booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
year = {2016}
}
-</pre></section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
+</pre><h4>ToDo</h4>
+<ul>
+<li>clean up citations, formatting</li>
+</ul>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a><a href="#[^sensenets_sensetime]_2">b</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Attention-Aware_Compositional_Network_CVPR_2018_paper.pdf">PDF</a></p>
</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a><a href="#[^sensetime1]_2">b</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">SemanticScholar</a>, <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf">PDF</a></p>
diff --git a/site/public/datasets/feret/index.html b/site/public/datasets/feret/index.html
index 089cd351..929041df 100644
--- a/site/public/datasets/feret/index.html
+++ b/site/public/datasets/feret/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h1>FacE REcognition Technology (FERET) Dataset</h1>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2007</div>
</div><div class='meta'>
@@ -41,10 +42,59 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vis-www.cs.umass.edu/lfw/' target='_blank' rel='nofollow noopener'>umass.edu</a></div>
- </div><h1>FacE REcognition Dataset (FERET)</h1>
-<p>[ page under development ]</p>
-<p>{% include 'dashboard.html' %}</p>
-<h3>(ignore) RESEARCH below this line</h3>
+ </div></div><section><p>[ page under development ]</p>
+</section><section>
+ <h3>Who used FERET?</h3>
+
+ <p>
+ This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+ </p>
+
+ </section>
+
+<section class="applet_container">
+<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
+</div> -->
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;chart&quot;}"></div>
+</section>
+
+<section class="applet_container">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;piechart&quot;}"></div>
+</section>
+
+<section>
+
+ <h3>Biometric Trade Routes</h3>
+
+ <p>
+ To help understand how FERET has been used around the world by commercial, military, and academic organizations, existing publicly available research citing FERET was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
+ </p>
+
+ </section>
+
+<section class="applet_container fullwidth">
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;map&quot;}"></div>
+</section>
+
+<div class="caption">
+ <ul class="map-legend">
+ <li class="edu">Academic</li>
+ <li class="com">Commercial</li>
+ <li class="gov">Military / Government</li>
+ </ul>
+ <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</div >
+</div>
+
+
+<section class="applet_container">
+
+ <h3>Dataset Citations</h3>
+ <p>
+ The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
+ </p>
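+ <p>As an illustration of this geocoding step, the sketch below looks up institution names with the geopy library. This is a minimal sketch under stated assumptions: the institution list and user-agent string are hypothetical, and the project's actual pipeline is not published here.</p>
+<pre>
+# Illustrative sketch of geocoding institution names extracted from
+# paper front matter; geopy's Nominatim geocoder is one possible backend.
+from geopy.geocoders import Nominatim
+
+institutions = [
+    "Duke University",                    # hypothetical examples
+    "Chinese University of Hong Kong",
+]
+
+geolocator = Nominatim(user_agent="dataset-citation-mapper")  # hypothetical agent
+for name in institutions:
+    location = geolocator.geocode(name)
+    if location:
+        print(name, location.latitude, location.longitude)
+</pre>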
+
+ <div class="applet" data-payload="{&quot;command&quot;: &quot;citations&quot;}"></div>
+</section><section><h3>(ignore) RESEARCH below this line</h3>
<ul>
<li>Years: 1993-1996</li>
<li>Images: 14,126</li>
@@ -63,7 +113,7 @@
<ul>
<li>"A release form is necessary because of the privacy laws in the United States."</li>
</ul>
-</div><h2>Funding</h2>
+<h2>Funding</h2>
<p>The FERET program is sponsored by the U.S. Department of Defense's Counterdrug Technology Development Program Office. The U.S. Army Research Laboratory (ARL) is the technical agent for the FERET program. ARL designed, administered, and scored the FERET tests. George Mason University collected, processed, and maintained the FERET database. Inquiries regarding the FERET database or test should be directed to P. Jonathon Phillips.</p>
</section>
diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html
index 231a5271..5f2229d8 100644
--- a/site/public/datasets/hrt_transgender/index.html
+++ b/site/public/datasets/hrt_transgender/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>TBD</span></div><div class='hero_subdesc'><span class='bgpad'>TBD
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>HRT Transgender Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2013</div>
</div><div class='meta'>
@@ -42,8 +43,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.faceaginggroup.com/hrt-transgender/' target='_blank' rel='nofollow noopener'>faceaginggroup.com</a></div>
- </div></div><h2>HRT Transgender Dataset</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
</section><section><p>{% include 'dashboard.html' %}</p>
</section>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index b01c1ac1..75961089 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -61,6 +61,30 @@
</div>
</a>
+ <a href="/datasets/hrt_transgender/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/hrt_transgender/assets/index.jpg)">
+ <div class="dataset">
+ <span class='title'>HRT Transgender Dataset</span>
+ <div class='fields'>
+ <div class='year visible'><span>2013</span></div>
+ <div class='purpose'><span>gender transition and facial recognition</span></div>
+ <div class='images'><span>10,564 images</span></div>
+ <div class='identities'><span>38 </span></div>
+ </div>
+ </div>
+ </a>
+
+ <a href="/datasets/msceleb/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/index.jpg)">
+ <div class="dataset">
+ <span class='title'>Microsoft Celeb</span>
+ <div class='fields'>
+ <div class='year visible'><span>2016</span></div>
+ <div class='purpose'><span>Large-scale face recognition</span></div>
+ <div class='images'><span>10,000,000 images</span></div>
+ <div class='identities'><span>100,000 </span></div>
+ </div>
+ </div>
+ </a>
+
<a href="/datasets/oxford_town_centre/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/index.jpg)">
<div class="dataset">
<span class='title'>Oxford Town Centre</span>
diff --git a/site/public/datasets/lfpw/index.html b/site/public/datasets/lfpw/index.html
index f734d332..c26b8583 100644
--- a/site/public/datasets/lfpw/index.html
+++ b/site/public/datasets/lfpw/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h2>Labeled Face Parts in The Wild</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2011</div>
</div><div class='meta'>
@@ -35,8 +36,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://neerajkumar.org/databases/lfpw/' target='_blank' rel='nofollow noopener'>neerajkumar.org</a></div>
- </div></div><h2>Labeled Face Parts in The Wild</h2>
-</section><section>
+ </div></div><section>
<h3>Who used LFWP?</h3>
<p>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index 60a6bf0e..1907f959 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -27,7 +27,8 @@
<div class="content content-">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/lfw/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Labeled Faces in The Wild (LFW)</span> is the first facial recognition dataset created entirely from online photos</span></div><div class='hero_subdesc'><span class='bgpad'>It includes 13,233 images of 5,749 people copied from the Internet during 2002-2004 and is the most frequently used dataset in the world for benchmarking face recognition algorithms.
-</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Labeled Faces in the Wild</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2007</div>
</div><div class='meta'>
@@ -42,8 +43,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vis-www.cs.umass.edu/lfw/' target='_blank' rel='nofollow noopener'>umass.edu</a></div>
- </div></div><h2>Labeled Faces in the Wild</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
<p><em>Labeled Faces in The Wild</em> (LFW) is "a database of face photographs designed for studying the problem of unconstrained face recognition"<a class="footnote_shim" name="[^lfw_www]_1"> </a><a href="#[^lfw_www]" class="footnote" title="Footnote 1">1</a>. It is used to evaluate and improve the performance of facial recognition algorithms in academic, commercial, and government research. According to BiometricUpdate.com<a class="footnote_shim" name="[^lfw_pingan]_1"> </a><a href="#[^lfw_pingan]" class="footnote" title="Footnote 3">3</a>, LFW is "the most widely used evaluation set in the field of facial recognition, LFW attracts a few dozen teams from around the globe including Google, Facebook, Microsoft Research Asia, Baidu, Tencent, SenseTime, Face++ and Chinese University of Hong Kong."</p>
<p>The LFW dataset includes 13,233 images of 5,749 people, collected between 2002 and 2004. LFW is a subset of <em>Names and Faces</em> and is part of the first facial recognition training dataset created entirely from images appearing on the Internet. The people appearing in LFW are...</p>
<p>The <em>Names and Faces</em> dataset was the first face recognition dataset created entirely from online photos. However, <em>Names and Faces</em> and <em>LFW</em> were not the first face recognition datasets created entirely "in the wild". That title belongs to the <a href="/datasets/ucd_faces/">UCD dataset</a>. Obtaining images "in the wild" means using them without the explicit consent or awareness of the subject or photographer.</p>
diff --git a/site/public/datasets/market_1501/index.html b/site/public/datasets/market_1501/index.html
index 72807efc..ad6bf458 100644
--- a/site/public/datasets/market_1501/index.html
+++ b/site/public/datasets/market_1501/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">Market-1501</span> is a dataset of CCTV footage from Tsinghua University</span></div><div class='hero_subdesc'><span class='bgpad'>The Market-1501 dataset includes 1,261 people from 5 HD surveillance cameras located on campus
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Market-1501 Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -42,8 +43,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.liangzheng.org/Project/project_reid.html' target='_blank' rel='nofollow noopener'>liangzheng.org</a></div>
- </div></div><h2>Market-1501 Dataset</h2>
-<p>[ PAGE UNDER DEVELOPMENT]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
</section><section>
<h3>Who used Market 1501?</h3>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index be21280c..b4d02c87 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>MS Celeb is a dataset of web images used for training and evaluating face recognition algorithms</span></div><div class='hero_subdesc'><span class='bgpad'>The MS Celeb dataset includes over 10,000,000 images and 93,000 identities of semi-public figures collected using the Bing search engine
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Microsoft Celeb Dataset (MS Celeb)</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
</div><div class='meta'>
@@ -48,8 +49,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.msceleb.org/' target='_blank' rel='nofollow noopener'>msceleb.org</a></div>
- </div></div><h2>Microsoft Celeb Dataset (MS Celeb)</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
<p><a href="https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology">https://www.hrw.org/news/2019/01/15/letter-microsoft-face-surveillance-technology</a></p>
<p><a href="https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm">https://www.scmp.com/tech/science-research/article/3005733/what-you-need-know-about-sensenets-facial-recognition-firm</a></p>
</section><section>
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index af020855..8c95f287 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St, Oxford, England</span></div><div class='hero_subdesc'><span class='bgpad'>The Oxford Town Centre dataset includes approximately 2,200 identities and is used for research and development of face recognition systems
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>Oxford Town Centre</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2009</div>
</div><div class='meta'>
@@ -48,8 +49,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div>
- </div></div><h2>Oxford Town Centre</h2>
-<p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the Oxford Town Centre dataset has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
+ </div></div><section><p>The Oxford Town Centre dataset is a CCTV video of pedestrians in a busy downtown area in Oxford used for research and development of activity and face recognition systems.<a class="footnote_shim" name="[^ben_benfold_orig]_1"> </a><a href="#[^ben_benfold_orig]" class="footnote" title="Footnote 1">1</a> The CCTV video was obtained from a public surveillance camera at the corner of Cornmarket and Market St. in Oxford, England and includes approximately 2,200 people. Since its publication in 2009<a class="footnote_shim" name="[^guiding_surveillance]_1"> </a><a href="#[^guiding_surveillance]" class="footnote" title="Footnote 2">2</a> the Oxford Town Centre dataset has been used in over 80 verified research projects including commercial research by Amazon, Disney, OSRAM, and Huawei; and academic research in China, Israel, Russia, Singapore, the US, and Germany among dozens more.</p>
<p>The Oxford Town Centre dataset is unique in that it uses footage from a public surveillance camera that would otherwise be designated for public safety. The video shows that the pedestrians act naturally and unrehearsed, indicating they neither knew of nor consented to participation in the research project.</p>
</section><section>
<h3>Who used TownCentre?</h3>
diff --git a/site/public/datasets/pipa/index.html b/site/public/datasets/pipa/index.html
index 780b3029..d02540f0 100644
--- a/site/public/datasets/pipa/index.html
+++ b/site/public/datasets/pipa/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pipa/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name"> People in Photo Albums (PIPA)</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescription ]
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>People in Photo Albums</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -45,8 +46,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://people.eecs.berkeley.edu/~nzhang/piper.html' target='_blank' rel='nofollow noopener'>berkeley.edu</a></div>
- </div></div><h2>People in Photo Albums</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
</section><section>
<h3>Who used PIPA Dataset?</h3>
diff --git a/site/public/datasets/pubfig/index.html b/site/public/datasets/pubfig/index.html
index 2c8bd7b1..ed593054 100644
--- a/site/public/datasets/pubfig/index.html
+++ b/site/public/datasets/pubfig/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/pubfig/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">PubFig</span> is a dataset...</span></div><div class='hero_subdesc'><span class='bgpad'>[ add subdescription ]
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>PubFig</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2009</div>
</div><div class='meta'>
@@ -42,8 +43,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.cs.columbia.edu/CAVE/databases/pubfig/' target='_blank' rel='nofollow noopener'>columbia.edu</a></div>
- </div></div><h2>PubFig</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
+ </div></div><section><p>[ PAGE UNDER DEVELOPMENT ]</p>
</section><section>
<h3>Who used PubFig?</h3>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index 1d76de3a..27d30716 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -28,7 +28,7 @@
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">UnConstrained College Students</span> is a dataset of long-range surveillance photos of students on the University of Colorado Colorado Springs campus</span></div><div class='hero_subdesc'><span class='bgpad'>The UnConstrained College Students dataset includes 16,149 images of 1,732 students, faculty, and pedestrians and is used for developing face recognition and face detection algorithms
</span></div></div></section><section><h2>UnConstrained College Students</h2>
-</section><section><div class='right-sidebar'><div class='meta'>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2016</div>
</div><div class='meta'>
@@ -49,7 +49,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://vast.uccs.edu/Opensetface/' target='_blank' rel='nofollow noopener'>uccs.edu</a></div>
- </div></div><p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs. According to the authors of two papers associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge" <a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 2">2</a>. In this investigation, we examine the funding sources, contents of the dataset, photo EXIF data, and publicy available research project citations.</p>
+ </div></div><section><p>UnConstrained College Students (UCCS) is a dataset of long-range surveillance photos captured at University of Colorado Colorado Springs. According to the authors of two papers associated with the dataset, over 1,700 students and pedestrians were "photographed using a long-range high-resolution surveillance camera without their knowledge" <a class="footnote_shim" name="[^funding_uccs]_1"> </a><a href="#[^funding_uccs]" class="footnote" title="Footnote 2">2</a>. In this investigation, we examine the funding sources, contents of the dataset, photo EXIF data, and publicly available research project citations.</p>
<p>According to the authors of the UnConstrained College Students dataset, it is primarily used for research and development of "face detection and recognition research towards surveillance applications that are becoming more popular and more required nowadays, and where no automatic recognition algorithm has proven to be useful yet." Applications of this technology include usage by defense and intelligence agencies, who were also the primary funding sources of the UCCS dataset.</p>
<p>In the two papers associated with the release of the UCCS dataset (<a href="https://www.semanticscholar.org/paper/Unconstrained-Face-Detection-and-Open-Set-Face-G%C3%BCnther-Hu/d4f1eb008eb80595bcfdac368e23ae9754e1e745">Unconstrained Face Detection and Open-Set Face Recognition Challenge</a> and <a href="https://www.semanticscholar.org/paper/Large-scale-unconstrained-open-set-face-database-Sapkota-Boult/07fcbae86f7a3ad3ea1cf95178459ee9eaf77cb1">Large Scale Unconstrained Open Set Face Database</a>), the researchers disclosed their funding sources as ODNI (United States Office of the Director of National Intelligence), IARPA (Intelligence Advanced Research Projects Activity), ONR MURI (Office of Naval Research Multidisciplinary University Research Initiative), Army SBIR (Small Business Innovation Research), SOCOM SBIR (Special Operations Command Small Business Innovation Research), and the National Science Foundation. Further, UCCS's VAST site explicitly <a href="https://vast.uccs.edu/project/iarpa-janus/">states</a> they are part of the <a href="https://www.iarpa.gov/index.php/research-programs/janus">IARPA Janus</a> program, a face recognition project developed to serve the needs of national intelligence interests.</p>
</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/uccs_map_aerial.jpg' alt=' Location on campus where students were unknowingly photographed with a telephoto lens to be used for defense and intelligence agency funded research on face recognition. Image: Google Maps'><div class='caption'> Location on campus where students were unknowingly photographed with a telephoto lens to be used for defense and intelligence agency funded research on face recognition. Image: Google Maps</div></div></section><section><p>The UCCS dataset includes the highest resolution images of any publicly available face recognition dataset discovered so far (18MP) and was, as of 2018, the "largest surveillance FR benchmark in the public domain."<a class="footnote_shim" name="[^surv_face_qmul]_1"> </a><a href="#[^surv_face_qmul]" class="footnote" title="Footnote 3">3</a> To create the dataset, the researchers used a Canon 7D digital camera fitted with a Sigma 800mm telephoto lens and photographed students from a distance of 150&ndash;200m through their office window. Photos were taken during the morning and afternoon while students were walking to and from classes. According to an analysis of the EXIF data embedded in the photos, nearly half of the 16,149 photos were taken on Tuesdays. The most popular time was during lunch break. All of the photos were taken during the spring semesters of 2012 and 2013 but the dataset was not publicly released until 2016.</p>
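<p>The weekday tally described above can be reproduced from the photos' EXIF timestamps. The sketch below is a minimal illustration using Pillow; the directory path is hypothetical and the analysis assumes the JPEGs carry a DateTimeOriginal tag.</p>
<pre>
# Illustrative sketch: tally photos by weekday from EXIF timestamps.
# "uccs_photos" is a hypothetical local directory of the dataset's JPEGs.
from collections import Counter
from datetime import datetime
from pathlib import Path
from PIL import Image

DATETIME_ORIGINAL = 36867  # standard EXIF tag id for DateTimeOriginal

counts = Counter()
for path in Path("uccs_photos").glob("*.jpg"):
    exif = Image.open(path)._getexif() or {}  # Pillow's legacy EXIF helper
    stamp = exif.get(DATETIME_ORIGINAL)
    if stamp:
        taken = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
        counts[taken.strftime("%A")] += 1

print(counts.most_common())  # per the investigation, Tuesdays dominate
</pre>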
diff --git a/site/public/datasets/vgg_face2/index.html b/site/public/datasets/vgg_face2/index.html
index 75d73824..3c2859a5 100644
--- a/site/public/datasets/vgg_face2/index.html
+++ b/site/public/datasets/vgg_face2/index.html
@@ -26,7 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'><div class='meta'>
+ <section><h2>VGG Face 2</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2015</div>
</div><div class='meta'>
@@ -47,8 +48,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://purl.stanford.edu/sx925dc9385' target='_blank' rel='nofollow noopener'>stanford.edu</a></div>
- </div></div><h2>VGG Face 2</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
</section><section>
<h3>Who used VGG Face 2?</h3>
diff --git a/site/public/datasets/viper/index.html b/site/public/datasets/viper/index.html
index 5b3ac35b..494c249b 100644
--- a/site/public/datasets/viper/index.html
+++ b/site/public/datasets/viper/index.html
@@ -27,7 +27,8 @@
<div class="content content-dataset">
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">VIPeR</span> is a person re-identification dataset of images captured at UC Santa Cruz in 2007</span></div><div class='hero_subdesc'><span class='bgpad'>VIPeR contains 1,264 images of 632 people on the UC Santa Cruz campus and is used to train person re-identification algorithms for surveillance
-</span></div></div></section><section><div class='right-sidebar'><div class='meta'>
+</span></div></div></section><section><h2>VIPeR Dataset</h2>
+</section><div class='right-sidebar'><div class='meta'>
<div class='gray'>Published</div>
<div>2007</div>
</div><div class='meta'>
@@ -45,8 +46,7 @@
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='https://vision.soe.ucsc.edu/node/178' target='_blank' rel='nofollow noopener'>ucsc.edu</a></div>
- </div></div><h2>VIPeR Dataset</h2>
-<p>[ page under development ]</p>
+ </div></div><section><p>[ page under development ]</p>
<p><em>VIPeR (Viewpoint Invariant Pedestrian Recognition)</em> is a dataset of pedestrian images captured at University of California Santa Cruz in 2007. According to the researchers, two "cameras were placed in different locations in an academic setting and subjects were notified of the presence of cameras, but were not coached or instructed in any way."</p>
<p>VIPeR is amongst the most widely used publicly available person re-identification datasets. In 2017 the VIPeR dataset was combined into a larger person re-identification dataset created by the Chinese University of Hong Kong called PETA (PEdesTrian Attribute).</p>
</section><section>
diff --git a/site/public/datasets/youtube_celebrities/index.html b/site/public/datasets/youtube_celebrities/index.html
index 39670c19..9a6ae18e 100644
--- a/site/public/datasets/youtube_celebrities/index.html
+++ b/site/public/datasets/youtube_celebrities/index.html
@@ -26,8 +26,8 @@
</header>
<div class="content content-">
- <section><div class='right-sidebar'></div><h2>YouTube Celebrities</h2>
-<p>[ page under development ]</p>
+ <section><h2>YouTube Celebrities</h2>
+</section><div class='right-sidebar'></div><section><p>[ page under development ]</p>
</section><section>
<h3>Who used YouTube Celebrities?</h3>