author    Jules Laplace <julescarbon@gmail.com>  2019-04-11 18:44:21 +0200
committer Jules Laplace <julescarbon@gmail.com>  2019-04-11 18:44:21 +0200
commit    9b1e2709cbdb40eabb34d379df18e61c10e3737c (patch)
tree      b04bae90e465ad524441bc745f9b849a245287e4 /site/public
parent    3dedfa97961c1c4569ee30fc9dc039ee46f8b19d (diff)
add h3
Diffstat (limited to 'site/public')
-rw-r--r--  site/public/datasets/50_people_one_question/index.html   2
-rw-r--r--  site/public/datasets/brainwash/index.html                4
-rw-r--r--  site/public/datasets/duke_mtmc/index.html                8
-rw-r--r--  site/public/datasets/hrt_transgender/index.html          2
-rw-r--r--  site/public/datasets/index.html                         36
-rw-r--r--  site/public/datasets/lfw/index.html                      4
-rw-r--r--  site/public/datasets/msceleb/index.html                  4
-rw-r--r--  site/public/datasets/oxford_town_centre/index.html      11
-rw-r--r--  site/public/datasets/uccs/index.html                     6
9 files changed, 22 insertions, 55 deletions
diff --git a/site/public/datasets/50_people_one_question/index.html b/site/public/datasets/50_people_one_question/index.html
index b27fa3e5..3b33f530 100644
--- a/site/public/datasets/50_people_one_question/index.html
+++ b/site/public/datasets/50_people_one_question/index.html
@@ -35,7 +35,7 @@
<div>33 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>Facial landmark estimation in the wild</div>
+ <div>Facial landmark estimation</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.vision.caltech.edu/~dhall/projects/MergingPoseEstimates/' target='_blank' rel='nofollow noopener'>caltech.edu</a></div>
diff --git a/site/public/datasets/brainwash/index.html b/site/public/datasets/brainwash/index.html
index 10ee577c..bd59f573 100644
--- a/site/public/datasets/brainwash/index.html
+++ b/site/public/datasets/brainwash/index.html
@@ -120,11 +120,11 @@
<li>add ethics link to Stanford</li>
<li>add optout info</li>
</ul>
-</section><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"><a href="#[^readme]_1">a</a></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
</li><li><a name="[^end_to_end]" class="footnote_shim"></a><span class="backlinks"><a href="#[^end_to_end]_1">a</a></span><p>Stewart, Russel. Andriluka, Mykhaylo. "End-to-end people detection in crowded scenes". 2016.</p>
</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"><a href="#[^localized_region_context]_1">a</a></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p>
</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^replacement_algorithm]_1">a</a></span><p>Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 0d082c15..9bec47ed 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -35,10 +35,10 @@
<div>2,000,000 </div>
</div><div class='meta'>
<div class='gray'>Identities</div>
- <div>1,812 </div>
+ <div>2,700 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>Person re-identification and multi-camera tracking</div>
+ <div>Person re-identification, multi-camera tracking</div>
</div><div class='meta'>
<div class='gray'>Created by</div>
<div>Computer Science Department, Duke University, Durham, US</div>
@@ -112,7 +112,7 @@
</section><section><h3>Notes</h3>
<p>The Duke MTMC dataset paper mentions 2,700 identities, but their ground truth file only lists annotations for 1,812</p>
-</section><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^sensetime_qz]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime_qz]_1">a</a></span><p><a href="https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/">https://qz.com/1248493/sensetime-the-billion-dollar-alibaba-backed-ai-company-thats-quietly-watching-everyone-in-china/</a></p>
</li><li><a name="[^sensenets_uyghurs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_uyghurs]_1">a</a></span><p><a href="https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/">https://foreignpolicy.com/2019/03/19/962492-orwell-china-socialcredit-surveillance/</a></p>
</li><li><a name="[^sensenets_sensetime]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensenets_sensetime]_1">a</a></span><p>"Attention-Aware Compositional Network for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/Attention-Aware-Compositional-Network-for-Person-Xu-Zhao/14ce502bc19b225466126b256511f9c05cadcb6e">Source</a></p>
</li><li><a name="[^sensetime1]" class="footnote_shim"></a><span class="backlinks"><a href="#[^sensetime1]_1">a</a></span><p>"End-to-End Deep Kronecker-Product Matching for Person Re-identification". 2018. <a href="https://www.semanticscholar.org/paper/End-to-End-Deep-Kronecker-Product-Matching-for-Shen-Xiao/947954cafdefd471b75da8c3bb4c21b9e6d57838">source</a></p>
@@ -122,7 +122,7 @@
</li><li><a name="[^cn_defense2]" class="footnote_shim"></a><span class="backlinks"><a href="#[^cn_defense2]_1">a</a></span><p>"Unsupervised Multi-Object Detection for Video Surveillance Using Memory-Based Recurrent Attention Networks". 2018. <a href="https://www.semanticscholar.org/paper/Unsupervised-Multi-Object-Detection-for-Video-Using-He-He/59f357015054bab43fb8cbfd3f3dbf17b1d1f881">Source</a></p>
</li><li><a name="[^iarpa_ibm]" class="footnote_shim"></a><span class="backlinks"><a href="#[^iarpa_ibm]_1">a</a></span><p>"Horizontal Pyramid Matching for Person Re-identification". 2019. <a href="https://www.semanticscholar.org/paper/Horizontal-Pyramid-Matching-for-Person-Fu-Wei/c2a5f27d97744bc1f96d7e1074395749e3c59bc8">Source</a></p>
</li><li><a name="[^us_dhs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^us_dhs]_1">a</a></span><p>"Re-Identification with Consistent Attentive Siamese Networks". 2018. <a href="https://www.semanticscholar.org/paper/Re-Identification-with-Consistent-Attentive-Siamese-Zheng-Karanam/24d6d3adf2176516ef0de2e943ce2084e27c4f94">Source</a></p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/hrt_transgender/index.html b/site/public/datasets/hrt_transgender/index.html
index 099dea4e..486b9122 100644
--- a/site/public/datasets/hrt_transgender/index.html
+++ b/site/public/datasets/hrt_transgender/index.html
@@ -38,7 +38,7 @@
<div>38 </div>
</div><div class='meta'>
<div class='gray'>Purpose</div>
- <div>gender transition and facial recognition</div>
+ <div>Face recognition, gender transition biometrics</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.faceaginggroup.com/hrt-transgender/' target='_blank' rel='nofollow noopener'>faceaginggroup.com</a></div>
diff --git a/site/public/datasets/index.html b/site/public/datasets/index.html
index 6f87ff68..b01c1ac1 100644
--- a/site/public/datasets/index.html
+++ b/site/public/datasets/index.html
@@ -61,30 +61,6 @@
</div>
</a>
- <a href="/datasets/market_1501/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/market_1501/assets/index.jpg)">
- <div class="dataset">
- <span class='title'>Market-1501</span>
- <div class='fields'>
- <div class='year visible'><span>2015</span></div>
- <div class='purpose'><span>Person re-identification</span></div>
- <div class='images'><span>32,668 images</span></div>
- <div class='identities'><span>1,501 </span></div>
- </div>
- </div>
- </a>
-
- <a href="/datasets/msceleb/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/msceleb/assets/index.jpg)">
- <div class="dataset">
- <span class='title'>Microsoft Celeb</span>
- <div class='fields'>
- <div class='year visible'><span>2016</span></div>
- <div class='purpose'><span>Large-scale face recognition</span></div>
- <div class='images'><span>1,000,000 images</span></div>
- <div class='identities'><span>100,000 </span></div>
- </div>
- </div>
- </a>
-
<a href="/datasets/oxford_town_centre/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/index.jpg)">
<div class="dataset">
<span class='title'>Oxford Town Centre</span>
@@ -109,18 +85,6 @@
</div>
</a>
- <a href="/datasets/viper/" style="background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/viper/assets/index.jpg)">
- <div class="dataset">
- <span class='title'>VIPeR</span>
- <div class='fields'>
- <div class='year visible'><span>2007</span></div>
- <div class='purpose'><span>Person re-identification</span></div>
- <div class='images'><span>1,264 images</span></div>
- <div class='identities'><span>632 </span></div>
- </div>
- </div>
- </a>
-
</div>
</section>
diff --git a/site/public/datasets/lfw/index.html b/site/public/datasets/lfw/index.html
index cb487913..60a6bf0e 100644
--- a/site/public/datasets/lfw/index.html
+++ b/site/public/datasets/lfw/index.html
@@ -141,10 +141,10 @@
<li>The word "future" appears 71 times</li>
<li>* denotes partial funding for related research</li>
</ul>
-</section><section><ul class="footnotes"><li><a name="[^lfw_www]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_www]_1">a</a><a href="#[^lfw_www]_2">b</a></span><p><a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a></p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^lfw_www]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_www]_1">a</a><a href="#[^lfw_www]_2">b</a></span><p><a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a></p>
</li><li><a name="[^lfw_baidu]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_baidu]_1">a</a></span><p>Jingtuo Liu, Yafeng Deng, Tao Bai, Zhengping Wei, Chang Huang. Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. <a href="https://arxiv.org/abs/1506.07310">https://arxiv.org/abs/1506.07310</a></p>
</li><li><a name="[^lfw_pingan]" class="footnote_shim"></a><span class="backlinks"><a href="#[^lfw_pingan]_1">a</a><a href="#[^lfw_pingan]_2">b</a><a href="#[^lfw_pingan]_3">c</a></span><p>Lee, Justin. "PING AN Tech facial recognition receives high score in latest LFW test results". BiometricUpdate.com. Feb 13, 2017. <a href="https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results">https://www.biometricupdate.com/201702/ping-an-tech-facial-recognition-receives-high-score-in-latest-lfw-test-results</a></p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/msceleb/index.html b/site/public/datasets/msceleb/index.html
index fd64189c..cf3a654f 100644
--- a/site/public/datasets/msceleb/index.html
+++ b/site/public/datasets/msceleb/index.html
@@ -114,10 +114,10 @@
<ul>
<li>The dataset author spoke about his research at the CVPR conference in 2016 <a href="https://www.youtube.com/watch?v=Nl2fBKxwusQ">https://www.youtube.com/watch?v=Nl2fBKxwusQ</a></li>
</ul>
-</section><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^readme]" class="footnote_shim"></a><span class="backlinks"></span><p>"readme.txt" <a href="https://exhibits.stanford.edu/data/catalog/sx925dc9385">https://exhibits.stanford.edu/data/catalog/sx925dc9385</a>.</p>
</li><li><a name="[^localized_region_context]" class="footnote_shim"></a><span class="backlinks"></span><p>Li, Y. and Dou, Y. and Liu, X. and Li, T. Localized Region Context and Object Feature Fusion for People Head Detection. ICIP16 Proceedings. 2016. Pages 594-598.</p>
</li><li><a name="[^replacement_algorithm]" class="footnote_shim"></a><span class="backlinks"></span><p>Zhao. X, Wang Y, Dou, Y. A Replacement Algorithm of Non-Maximum Suppression Base on Graph Clustering.</p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/oxford_town_centre/index.html b/site/public/datasets/oxford_town_centre/index.html
index 5379682c..63dc52d4 100644
--- a/site/public/datasets/oxford_town_centre/index.html
+++ b/site/public/datasets/oxford_town_centre/index.html
@@ -29,11 +29,14 @@
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/oxford_town_centre/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'>Oxford Town Centre is a dataset of surveillance camera footage from Cornmarket St Oxford, England</span></div><div class='hero_subdesc'><span class='bgpad'>The Oxford Town Centre dataset includes approximately 2,200 identities and is used for research and development of face recognition systems
</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
<div class='gray'>Published</div>
- <div>2011</div>
+ <div>2009</div>
</div><div class='meta'>
<div class='gray'>Videos</div>
<div>1 </div>
</div><div class='meta'>
+ <div class='gray'>Identities</div>
+ <div>2,200 </div>
+ </div><div class='meta'>
<div class='gray'>Purpose</div>
<div>Person detection, gaze estimation</div>
</div><div class='meta'>
@@ -41,7 +44,7 @@
<div>EU FP6 Hermes project and Oxford Risk </div>
</div><div class='meta'>
<div class='gray'>Download Size</div>
- <div>0.118 GB</div>
+ <div>0.147 GB</div>
</div><div class='meta'>
<div class='gray'>Website</div>
<div><a href='http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/2009bbenfold_headpose/project.html' target='_blank' rel='nofollow noopener'>ox.ac.uk</a></div>
@@ -122,9 +125,9 @@
<li><a href="https://www.youtube.com/watch?v=ErLtfUAJA8U">towncentre</a></li>
<li><a href="https://www.youtube.com/watch?v=LwMOmqvhnoc">VTD - towncenter.avi</a></li>
</ul>
-</section><section><ul class="footnotes"><li><a name="[^ben_benfold_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^ben_benfold_orig]_1">a</a></span><p>Benfold, Ben and Reid, Ian. "Stable Multi-Target Tracking in Real-Time Surveillance Video". CVPR 2011. Pages 3457-3464.</p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^ben_benfold_orig]" class="footnote_shim"></a><span class="backlinks"><a href="#[^ben_benfold_orig]_1">a</a></span><p>Benfold, Ben and Reid, Ian. "Stable Multi-Target Tracking in Real-Time Surveillance Video". CVPR 2011. Pages 3457-3464.</p>
</li><li><a name="[^guiding_surveillance]" class="footnote_shim"></a><span class="backlinks"><a href="#[^guiding_surveillance]_1">a</a></span><p>"Guiding Visual Surveillance by Tracking Human Attention". 2009.</p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>
diff --git a/site/public/datasets/uccs/index.html b/site/public/datasets/uccs/index.html
index c9faac68..794e3e69 100644
--- a/site/public/datasets/uccs/index.html
+++ b/site/public/datasets/uccs/index.html
@@ -29,7 +29,7 @@
<section class='intro_section' style='background-image: url(https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/uccs/assets/background.jpg)'><div class='inner'><div class='hero_desc'><span class='bgpad'><span class="dataset-name">UnConstrained College Students</span> is a dataset of long-range surveillance photos of students at University of Colorado in Colorado Springs</span></div><div class='hero_subdesc'><span class='bgpad'>The UnConstrained College Students dataset includes 16,149 images and 1,732 identities of subjects on University of Colorado Colorado Springs campus and is used for making face recognition and face detection algorithms
</span></div></div></section><section><div class='left-sidebar'><div class='meta'>
<div class='gray'>Published</div>
- <div>2018</div>
+ <div>2016</div>
</div><div class='meta'>
<div class='gray'>Images</div>
<div>16,149 </div>
@@ -231,9 +231,9 @@
<li>adding more verified locations to map and charts</li>
<li>add EXIF file to CDN</li>
</ul>
-</section><section><ul class="footnotes"><li><a name="[^funding_sb]" class="footnote_shim"></a><span class="backlinks"><a href="#[^funding_sb]_1">a</a></span><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013.</p>
+</section><section><h3>References</h3><section><ul class="footnotes"><li><a name="[^funding_sb]" class="footnote_shim"></a><span class="backlinks"><a href="#[^funding_sb]_1">a</a></span><p>Sapkota, Archana and Boult, Terrance. "Large Scale Unconstrained Open Set Face Database." 2013.</p>
</li><li><a name="[^funding_uccs]" class="footnote_shim"></a><span class="backlinks"><a href="#[^funding_uccs]_1">a</a><a href="#[^funding_uccs]_2">b</a></span><p>Günther, M. et. al. "Unconstrained Face Detection and Open-Set Face Recognition Challenge," 2018. Arxiv 1708.02337v3.</p>
-</li></ul></section>
+</li></ul></section></section>
</div>
<footer>