Multiple View Tracking

From ISRWiki
'''CAVIAR''' http://homepages.inf.ed.ac.uk/rbf/CAVIAR/
* 2 viewpoints (90° difference), 2 cameras
* Annotated - Bounding box and label
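CAVIAR's ground truth is distributed as XML. A minimal sketch of pulling per-frame bounding boxes with Python's standard library; the tag and attribute names here (`frame`, `object`, `box`, `xc`/`yc`/`w`/`h`) follow common CVML-style examples but are an assumption — check them against the actual files:

```python
import xml.etree.ElementTree as ET

# Hypothetical CVML-like snippet; the real CAVIAR files may differ.
SAMPLE = """
<dataset>
  <frame number="0">
    <objectlist>
      <object id="3">
        <box h="40" w="20" xc="110" yc="75"/>
      </object>
    </objectlist>
  </frame>
</dataset>
"""

def boxes_per_frame(xml_text):
    """Map frame number -> list of (object id, (xc, yc, w, h)) boxes."""
    root = ET.fromstring(xml_text)
    frames = {}
    for frame in root.iter("frame"):
        boxes = []
        for obj in frame.iter("object"):
            box = obj.find(".//box")
            boxes.append((int(obj.get("id")),
                          tuple(int(box.get(k)) for k in ("xc", "yc", "w", "h"))))
        frames[int(frame.get("number"))] = boxes
    return frames
```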
 
'''VIPeR''' http://vision.soe.ucsc.edu/node/178
* 2 viewpoints
* "Annotated" - already cropped pedestrian images
* 632 people, 2 images each
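For re-identification experiments, VIPeR's two shots per person are typically split into a probe and a gallery set. A sketch of pairing the two views by filename, assuming a `cam_a`/`cam_b` directory layout with matching names per person (an assumption — verify against the actual download):

```python
from pathlib import Path

def probe_gallery_pairs(root):
    """Pair cam_a (probe) with cam_b (gallery) images that share a filename.

    Assumes root/cam_a/*.bmp and root/cam_b/*.bmp, with the same person
    having the same filename in both directories (hypothetical layout).
    """
    root = Path(root)
    a = {p.name: p for p in (root / "cam_a").glob("*.bmp")}
    b = {p.name: p for p in (root / "cam_b").glob("*.bmp")}
    shared = sorted(a.keys() & b.keys())  # people seen in both views
    return [(a[name], b[name]) for name in shared]
```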
 
'''LRS''' http://lrs.icg.tugraz.at/download.php
* 2 viewpoints
* "Annotated" - already cropped 128x64 images
* 200 people, ~200 images/person/camera (min 5)
* 900+ single-view people (for the gallery)
 
'''3DPeS''' http://www.openvisor.org/3dpes.asp http://imagelab.ing.unimore.it/3DPeS/snapshots.rar
* 8 viewpoints
* "Annotated" - already cropped 606 pedestrian images
* 200 people, 606 images total (min 2 images/person)
 
--
 
'''ETHZ''' http://www.vision.ee.ethz.ch/~aess/dataset/
* ~1 viewpoint (1 moving camera)
* Annotated (we already cropped out the pedestrians)
* 146 people, ~8500 images
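The re-identification datasets above are usually evaluated with a Cumulative Match Characteristic (CMC) curve. A dataset-agnostic sketch, given the 1-based rank at which each probe's true gallery match appears:

```python
def cmc(ranks, max_rank):
    """Cumulative Match Characteristic: fraction of probes whose correct
    gallery match appears at rank <= k, for k = 1..max_rank.
    `ranks` holds the 1-based rank of the true match for each probe."""
    n = len(ranks)
    return [sum(r <= k for r in ranks) / n for k in range(1, max_rank + 1)]
```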


'''MuHAVi''': Multicamera Human Action Video Data http://dipersec.king.ac.uk/MuHAVi-MAS/
* Images - 8 viewpoints, 8 cameras, 17 actions/video-sequences
* Annotated (silhouettes) - 2 cameras, 2 actors, 2 actions/video-sequences


'''ViHASi''': Virtual Human Action Silhouette Data http://dipersec.king.ac.uk/VIHASI/ username: VIHASI password: virtual$world
* 20 viewpoints, 9 virtual actors
* NOT annotated (but straightforward with MotionBuilder, it's virtual!)
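Since the annotated part of these action datasets ships silhouettes rather than boxes, bounding boxes can be derived directly from the masks. A minimal sketch for a binary mask given as rows of 0/1 values:

```python
def silhouette_bbox(mask):
    """Tight bounding box (x_min, y_min, x_max, y_max) around the
    foreground (truthy) pixels of a binary mask given as rows of 0/1."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None  # empty silhouette: no foreground pixels
    return (min(xs), min(ys), max(xs), max(ys))
```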


'''BEHAVE''' Interactions Test Case Scenarios http://groups.inf.ed.ac.uk/vision/BEHAVEDATA/INTERACTIONS/
* 2 viewpoints (90° difference), 2 cameras
* Annotated - Bounding box and label (VIPER style)
* ~7 people (min 3, max 8 across sequences)


'''PETS2009''' Dataset S2: People Tracking, Sparse Crowd http://www.cvg.rdg.ac.uk/PETS2010/a.html#s2
* 4 viewpoints, 8 cameras
* Sparse Crowd
* Can't find annotation (but manual annotation was made for evaluation)


'''ETISEO''' http://www-sop.inria.fr/orion/ETISEO/download.htm#video_data
* unknown - asked permission for access - '''Got permission, if anyone wants, ask me - Dario'''
* supposedly fully annotated


'''UMPM''' Benchmark http://www.projects.science.uu.nl/umpm/ username: umpmuser password: ahJaka4o
* unknown - asked permission for access
* fully annotated - markers of body pose/position


[[Category:Vislab]]

Latest revision as of 11:21, 7 March 2012
