| author | adamhrv <adam@ahprojects.com> | 2019-04-05 13:17:05 +0200 |
|---|---|---|
| committer | adamhrv <adam@ahprojects.com> | 2019-04-05 13:17:05 +0200 |
| commit | b73e233acec5ad6c3aca7475288482f366f7a31f (patch) | |
| tree | 5c90491439e84905b52eebb0bb0ced95290112e9 /site/public/datasets/duke_mtmc/index.html | |
| parent | 2137d7183f48d426de2582b4786bb16c2ae6a82f (diff) | |
never say final, update uccs
Diffstat (limited to 'site/public/datasets/duke_mtmc/index.html')
| -rw-r--r-- | site/public/datasets/duke_mtmc/index.html | 159 |
1 file changed, 45 insertions, 114 deletions
diff --git a/site/public/datasets/duke_mtmc/index.html b/site/public/datasets/duke_mtmc/index.html
index 37de48ad..62e5d836 100644
--- a/site/public/datasets/duke_mtmc/index.html
+++ b/site/public/datasets/duke_mtmc/index.html
@@ -17,7 +17,7 @@
     <a class='slogan' href="/">
       <div class='logo'></div>
       <div class='site_name'>MegaPixels</div>
-      <div class='splash'>Duke MTMC</div>
+      <div class='splash'>Duke MTMC Dataset</div>
     </a>
     <div class='links'>
       <a href="/datasets/">Datasets</a>
@@ -45,35 +45,41 @@
   </div><div class='meta'>
     <div class='gray'>Website</div>
     <div><a href='http://vision.cs.duke.edu/DukeMTMC/' target='_blank' rel='nofollow noopener'>duke.edu</a></div>
-  </div><div class='meta'><div><div class='gray'>Created</div><div>2014</div></div><div><div class='gray'>Identities</div><div>Over 2,700</div></div><div><div class='gray'>Used for</div><div>Face recognition, person re-identification</div></div><div><div class='gray'>Created by</div><div>Computer Science Department, Duke University, Durham, US</div></div><div><div class='gray'>Website</div><div><a href="http://vision.cs.duke.edu/DukeMTMC/">duke.edu</a></div></div></div></div><h2>Duke Multi-Target, Multi-Camera Tracking Dataset (Duke MTMC)</h2>
-<p>[ PAGE UNDER DEVELOPMENT ]</p>
-<p>Duke MTMC is a dataset of video recorded on Duke University campus during for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em>. The videos were recorded during February and March 2014 and cinclude</p>
-<p>Includes a total of 888.8 minutes of video (ind. verified)</p>
-<p>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy."</p>
-<p>The dataset includes approximately 2,000 annotated identities appearing in 85 hours of video from 8 cameras located throughout Duke University's campus.</p>
-</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg' alt=' Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey'><div class='caption'> Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey</div></div></section><section><p>According to the dataset authors,</p>
+  </div></div><h2>Duke MTMC</h2>
+<p>[ page under development ]</p>
+<p>The Duke Multi-Target, Multi-Camera Tracking Dataset (MTMC) is a dataset of video recorded on Duke University campus for the purpose of training, evaluating, and improving <em>multi-target multi-camera tracking</em> for surveillance. The dataset includes over 14 hours of 1080p video from 8 cameras positioned around Duke's campus during February and March 2014. Over 2,700 unique people are included in the dataset, which has become one of the most widely used person re-identification image datasets.</p>
+<p>The 8 cameras deployed on Duke's campus were specifically set up to capture students "during periods between lectures, when pedestrian traffic is heavy".</p>
 </section><section>
+  <h3>Who used Duke MTMC Dataset?</h3>
+
+  <p>
+    This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+  </p>
+
+</section>
+
+<section class="applet_container">
+<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
+</div> -->
+  <div class="applet" data-payload="{"command": "chart"}"></div>
+</section>
+
+<section class="applet_container">
+  <div class="applet" data-payload="{"command": "piechart"}"></div>
+</section>
+
+<section>
   <h3>Biometric Trade Routes</h3>
-<!--
-  <div class="map-sidebar right-sidebar">
-    <h3>Legend</h3>
-    <ul>
-      <li><span style="color: #f2f293">■</span> Industry</li>
-      <li><span style="color: #f30000">■</span> Academic</li>
-      <li><span style="color: #3264f6">■</span> Government</li>
-    </ul>
-  </div>
-  -->
+
   <p>
-    To help understand how Duke MTMC Dataset has been used around the world for commercial, military and academic research; publicly available research citing Duke Multi-Target, Multi-Camera Tracking Project is collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal reserach projects at that location.
+    To help understand how the Duke MTMC Dataset has been used around the world by commercial, military, and academic organizations, existing publicly available research citing the Duke Multi-Target, Multi-Camera Tracking Project was collected, verified, and geocoded to show the biometric trade routes of people appearing in the images. Click on the markers to reveal research projects at that location.
   </p>
 </section>

 <section class="applet_container fullwidth">
   <div class="applet" data-payload="{"command": "map"}"></div>
-
 </section>

 <div class="caption">
@@ -81,30 +87,19 @@
     <li class="edu">Academic</li>
     <li class="com">Commercial</li>
     <li class="gov">Military / Government</li>
-    <li class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a> then dataset usage verified and geolocated.</li>
   </ul>
+  <div class="source">Citation data is collected using <a href="https://semanticscholar.org" target="_blank">SemanticScholar.org</a>, then dataset usage is verified and geolocated.</div>
 </div>

-<!-- <section>
-  <p class='subp'>
-    [section under development] Duke MTMC Dataset ... Standardized paragraph of text about the map. Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo.
-  </p>
-</section>
-  --><section>
-  <h3>Who used Duke MTMC Dataset?</h3>
+
+<section class="applet_container">
+
+  <h3>Dataset Citations</h3>
   <p>
-    This bar chart presents a ranking of the top countries where dataset citations originated. Mouse over individual columns to see yearly totals. These charts show at most the top 10 countries.
+    The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
   </p>
-
-  </section>
-<section class="applet_container">
-<!-- <div style="position: absolute;top: 0px;right: -55px;width: 180px;font-size: 14px;">Labeled Faces in the Wild Dataset<br><span class="numc" style="font-size: 11px;">20 citations</span>
-</div> -->
-  <div class="applet" data-payload="{"command": "chart"}"></div>
-</section><section class="applet_container">
-  <div class="applet" data-payload="{"command": "piechart"}"></div>
+  <div class="applet" data-payload="{"command": "citations"}"></div>
 </section><section>

   <div class="hr-wave-holder">
@@ -112,93 +107,29 @@
     <div class="hr-wave-line hr-wave-line2"></div>
   </div>

-  <h3>Supplementary Information</h3>
+  <h2>Supplementary Information</h2>

-</section><section class="applet_container">
-
-  <h3>Dataset Citations</h3>
-  <p>
-    The dataset citations used in the visualizations were collected from <a href="https://www.semanticscholar.org">Semantic Scholar</a>, a website which aggregates and indexes research papers. Each citation was geocoded using names of institutions found in the PDF front matter, or as listed on other resources. These papers have been manually verified to show that researchers downloaded and used the dataset to train or test machine learning algorithms.
-  </p>
-
-  <div class="applet" data-payload="{"command": "citations"}"></div>
-</section><section><h2>Research Notes</h2>
+</section><section><h4>Data Visualizations</h4>
+</section><section class='images'><div class='image'><img src='https://nyc3.digitaloceanspaces.com/megapixels/v1/datasets/duke_mtmc/assets/duke_mtmc_cam5_average_comp.jpg' alt=' Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey'><div class='caption'> Duke MTMC pixel-averaged image of camera #5 is shown with the bounding boxes for each student drawn in white. (c) Adam Harvey</div></div></section><section><h3>TODO</h3>
 <ul>
-<li>"We make available a new data set that has more than 2 million frames and more than 2,700 identities. It consists of 8×85 minutes of 1080p video recorded at 60 frames per second from 8 static cameras deployed on the Duke University campus during periods between lectures, when pedestrian traffic is heavy." - 27a2fad58dd8727e280f97036e0d2bc55ef5424c</li>
-<li>"This work was supported in part by the EPSRC Programme Grant (FACER2VM) EP/N007743/1, EPSRC/dstl/MURI project EP/R018456/1, the National Natural Science Foundation of China (61373055, 61672265, 61602390, 61532009, 61571313), Chinese Ministry of Education (Z2015101), Science and Technology Department of Sichuan Province (2017RZ0009 and 2017FZ0029), Education Department of Sichuan Province (15ZB0130), the Open Research Fund from Province Key Laboratory of Xihua University (szjj2015-056) and the NVIDIA GPU Grant Program." - ec9c20ed6cce15e9b63ac96bb5a6d55e69661e0b</li>
-<li>"DukeMTMC aims to accelerate advances in multi-target multi-camera tracking. It provides a tracking system that works within and across cameras, a new large scale HD video data set recorded by 8 synchronized cameras with more than 7,000 single camera trajectories and over 2,000 unique identities, and a new performance evaluation method that measures how often a system is correct about who is where"</li>
-<li><p>DukeMTMC is a new, manually annotated, calibrated, multi-camera data set recorded outdoors on the Duke University campus with 8 synchronized cameras. It consists of:</p>
-<p>8 static cameras x 85 minutes of 1080p 60 fps video
-    More than 2,000,000 manually annotated frames
-    More than 2,000 identities
-    Manual annotation by 5 people over 1 year
-    More identities than all existing MTMC datasets combined
-    Unconstrained paths, diverse appearance</p>
-</li>
-<li>DukeMTMC Project
-Ergys Ristani Ergys Ristani Ergys Ristani Ergys Ristani Ergys Ristani</li>
+<li>change to heatmap overlay of each location</li>
+<li>make fancy viz of foot trails with bbox and blurred persons</li>
+<li>expand story</li>
+<li>add google street view images of each camera location?</li>
+<li>add actual head detections to header image with faces blurred</li>
+<li>add 4 diverse example images with faces blurred</li>
+<li>add map location of the brainwash cafe</li>
 </ul>
-<p>People involved:
-Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara, Carlo Tomasi.</p>
-<p>Navigation:</p>
-<p>Data Set
-    Downloads
-    Downloads
-    Dataset Extensions
-    Performance Measures
-    Tracking Systems
-    Publications
-    How to Cite
-    Contact</p>
-<p>Welcome to the Duke Multi-Target, Multi-Camera Tracking Project.</p>
-<p>DukeMTMC aims to accelerate advances in multi-target multi-camera tracking. It provides a tracking system that works within and across cameras, a new large scale HD video data set recorded by 8 synchronized cameras with more than 7,000 single camera trajectories and over 2,000 unique identities, and a new performance evaluation method that measures how often a system is correct about who is where.
-DukeMTMC Data Set
-Snapshot from the DukeMTMC data set.</p>
-<p>DukeMTMC is a new, manually annotated, calibrated, multi-camera data set recorded outdoors on the Duke University campus with 8 synchronized cameras. It consists of:</p>
-<p>8 static cameras x 85 minutes of 1080p 60 fps video
-    More than 2,000,000 manually annotated frames
-    More than 2,000 identities
-    Manual annotation by 5 people over 1 year
-    More identities than all existing MTMC datasets combined
-    Unconstrained paths, diverse appearance</p>
-<p>News</p>
-<p>05 Feb 2019 We are organizing the 2nd Workshop on MTMCT and ReID at CVPR 2019
-    25 Jul 2018: The code for DeepCC is available on github
-    28 Feb 2018: OpenPose detections now available for download
-    19 Feb 2018: Our DeepCC tracker has been accepted to CVPR 2018
-    04 Oct 2017: A new blog post describes ID measures of performance
-    26 Jul 2017: Slides from the BMTT 2017 workshop are now available
-    09 Dec 2016: DukeMTMC is now hosted on MOTChallenge</p>
-<p>DukeMTMC Downloads</p>
-<p>DukeMTMC dataset (tracking)</p>
-<p>Dataset Extensions</p>
-<p>Below is a list of dataset extensions provided by the community:</p>
-<p>DukeMTMC-VideoReID (download)
-    DukeMTMC-reID (download)
-    DukeMTMC4REID
-    DukeMTMC-attribute</p>
-<p>If you use or extend DukeMTMC, please refer to the license terms.
-DukeMTMCT Benchmark</p>
-<p>DukeMTMCT is a tracking benchmark hosted on motchallenge.net. Click here for the up-to-date rankings. Here you will find the official motchallenge-devkit used for evaluation by MOTChallenge. For detailed instructions how to submit on motchallenge you can refer to this link.</p>
-<p>Trackers are ranked using our identity-based measures which compute how often the system is correct about who is where, regardless of how often a target is lost and reacquired. Our measures are useful in applications such as security, surveillance or sports. This short post describes our measures with illustrations, while for details you can refer to the original paper.
-Tracking Systems</p>
-<p>We provide code for the following tracking systems which are all based on Correlation Clustering optimization:</p>
-<p>DeepCC for single- and multi-camera tracking [1]
-    Single-Camera Tracker (demo video) [2]
-    Multi-Camera Tracker (demo video, failure cases) [2]
-    People-Groups Tracker [3]
-    Original Single-Camera Tracker [4]</p>
 </section>

 </div>
 <footer>
   <div>
     <a href="/">MegaPixels.cc</a>
-    <a href="/about/disclaimer/">Disclaimer</a>
-    <a href="/about/terms/">Terms of Use</a>
-    <a href="/about/privacy/">Privacy</a>
+    <a href="/datasets/">Datasets</a>
     <a href="/about/">About</a>
-    <a href="/about/team/">Team</a>
+    <a href="/about/press/">Press</a>
+    <a href="/about/legal/">Legal and Privacy</a>
   </div>
   <div>
     MegaPixels ©2017-19 Adam R. Harvey /
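
The composite referenced twice in this diff, a pixel-averaged image of camera #5 with each student's bounding box drawn in white, can be reproduced in spirit by averaging all frames from one camera and overlaying the annotated boxes. The sketch below is only an illustration of that idea, not the MegaPixels pipeline: it assumes OpenCV and NumPy are available, and the file names and the simple frame,x,y,w,h CSV box format are hypothetical stand-ins for the dataset's actual ground-truth files.

```python
# Minimal sketch (not the MegaPixels pipeline): average the frames of one
# camera and draw person bounding boxes in white, similar in spirit to the
# "pixel-averaged image of camera #5" composite described on the page.
# Assumes OpenCV (cv2) and NumPy; the file names and the CSV box format
# (frame,x,y,w,h) are hypothetical stand-ins for the real ground truth.
import csv
import cv2
import numpy as np

VIDEO_PATH = "camera5.mp4"        # hypothetical input video for one camera
BOXES_CSV = "camera5_boxes.csv"   # hypothetical annotations: frame,x,y,w,h
OUT_PATH = "camera5_average_comp.jpg"

cap = cv2.VideoCapture(VIDEO_PATH)
acc = None
n = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if acc is None:
        acc = np.zeros(frame.shape, dtype=np.float64)
    acc += frame          # accumulate pixel values in float to avoid overflow
    n += 1
cap.release()

if acc is None:
    raise SystemExit("no frames read from " + VIDEO_PATH)

avg = (acc / n).astype(np.uint8)  # per-pixel mean over all frames

# Overlay every annotated bounding box in white on the averaged image.
with open(BOXES_CSV) as f:
    for row in csv.reader(f):
        _, x, y, w, h = (int(float(v)) for v in row)
        cv2.rectangle(avg, (x, y), (x + w, y + h), (255, 255, 255), 1)

cv2.imwrite(OUT_PATH, avg)
```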

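The citation-mapping workflow described in the added text (institution names taken from the front matter of citing papers, then geocoded for the map and chart applets, with manual verification) can be sketched as follows. This is a minimal illustration under stated assumptions: the geopy library and its Nominatim geocoder are not mentioned in the source, and the institution strings are examples drawn from names that appear elsewhere in the diff.

```python
# Illustrative sketch of the geocoding step described above: turn institution
# names taken from citing papers into coordinates for the map visualization.
# Assumes the geopy package is installed; the user_agent string and the
# institution list are hypothetical examples.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="dataset-citation-mapper")

institutions = [
    "Duke University, Durham, NC",
    "Xihua University, Chengdu, China",
]

for name in institutions:
    location = geolocator.geocode(name)
    if location is None:
        # Unresolved names would be verified and located manually.
        print(f"{name}: not found, needs manual verification")
        continue
    print(f"{name}: {location.latitude:.4f}, {location.longitude:.4f}")
```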