Multiple View Tracking

From ISRWiki
Revision as of 14:42, 19 December 2011 by Dario (talk | contribs)

PETS2009 Dataset S2: People Tracking, Sparse Crowd http://www.cvg.rdg.ac.uk/PETS2010/a.html#s2

  • 4 viewpoints, 8 cameras
  • Sparse Crowd
  • No annotation found (a manual annotation was made for evaluation, but it does not seem to be publicly available)

MuHAVi: Multicamera Human Action Video Data http://dipersec.king.ac.uk/MuHAVi-MAS/

  • Images - 8 viewpoints, 8 cameras, 17 actions/video-sequences
  • Annotated (silhouettes) - 2 cameras, 2 actors, 2 actions/video-sequences

ViHASi: Virtual Human Action Silhouette Data http://dipersec.king.ac.uk/VIHASI/

  • 20 viewpoints, 9 virtual actors
  • NOT annotated (but straightforward with MotionBuilder, since it's virtual!)

BEHAVE Interactions Test Case Scenarios http://groups.inf.ed.ac.uk/vision/BEHAVEDATA/INTERACTIONS/

  • 2 viewpoints (90° apart), 2 cameras
  • Annotated - Bounding box and label (VIPER style)
  • ~7 people per sequence (min. 3, max. 8 across sequences)
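BEHAVE's ViPER-style ground truth is XML, so it can be read with a standard XML parser. A minimal sketch, assuming a simplified ViPER-like layout (the element and attribute names below are illustrative; the actual BEHAVE files follow the full ViPER-GT schema, with namespaces):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified ViPER-like annotation snippet for illustration.
XML = """<annotation>
  <object id="0" name="PERSON">
    <bbox frame="1" x="10" y="20" width="30" height="60"/>
    <bbox frame="2" x="12" y="21" width="30" height="60"/>
  </object>
</annotation>"""

def load_boxes(xml_text):
    """Return {frame: [(x, y, w, h), ...]} from the simplified XML above."""
    root = ET.fromstring(xml_text)
    boxes = {}
    for obj in root.iter("object"):
        for bb in obj.iter("bbox"):
            frame = int(bb.get("frame"))
            box = tuple(int(bb.get(k)) for k in ("x", "y", "width", "height"))
            boxes.setdefault(frame, []).append(box)
    return boxes

print(load_boxes(XML))
```

For the real files, only the tag/attribute names and the framespan encoding would need to change; the per-frame dictionary shape is convenient for frame-by-frame evaluation.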

CAVIAR http://homepages.inf.ed.ac.uk/rbf/CAVIAR/

  • 2 viewpoints (90° apart), 2 cameras
  • Annotated - Bounding box and label
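Since both BEHAVE and CAVIAR supply bounding-box ground truth, tracker output can be scored with the standard intersection-over-union overlap. A minimal sketch (the `iou` helper is illustrative, not part of either dataset's tooling; boxes are assumed to be `(x, y, w, h)` tuples):

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) axis-aligned boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Width/height of the overlapping region (clamped at zero).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # 50/150 ≈ 0.333
```

A detection is commonly counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.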

ETISEO http://www-sop.inria.fr/orion/ETISEO/download.htm#video_data username: umpmuser password: ahJaka4o

  • unknown - asked permission for access
  • supposedly fully annotated

UMPM Benchmark http://www.projects.science.uu.nl/umpm/

  • unknown - asked permission for access
  • fully annotated - markers for body pose/position