Multiple View Tracking

CAVIAR http://homepages.inf.ed.ac.uk/rbf/CAVIAR/

  • 2 viewpoints (90º difference), 2 cameras
  • Annotated - Bounding box and label

VIPeR http://vision.soe.ucsc.edu/node/178

  • 2 viewpoints
  • "Annoteted" - already cropped ped images
  • 632 people, 2 images each
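
A minimal pairing sketch for the VIPeR entry above, assuming the archive unpacks into cam_a/ and cam_b/ directories whose file names start with a shared person ID (e.g. "000_45.bmp"); the directory and file naming are assumptions, so check them against the actual download.

  # Sketch: pair the two images per person across the two VIPeR cameras.
  # ASSUMED layout: VIPeR/cam_a/<id>_<angle>.bmp and VIPeR/cam_b/<id>_<angle>.bmp
  import os

  def load_viper_pairs(root):
      """Return {person_id: (path_in_cam_a, path_in_cam_b)}."""
      pairs = {}
      for cam in ("cam_a", "cam_b"):
          cam_dir = os.path.join(root, cam)
          for name in sorted(os.listdir(cam_dir)):
              person_id = name.split("_")[0]  # assumed "<id>_<angle>" naming
              pairs.setdefault(person_id, {})[cam] = os.path.join(cam_dir, name)
      # keep only people that appear in both views
      return {pid: (v["cam_a"], v["cam_b"])
              for pid, v in pairs.items() if "cam_a" in v and "cam_b" in v}

  if __name__ == "__main__":
      pairs = load_viper_pairs("VIPeR")
      print(len(pairs), "cross-view pairs")  # should be 632 if the layout matches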

LRS http://lrs.icg.tugraz.at/download.php

  • 2 viewpoints
  • "Annotated" - already cropped 128x64 images
  • 200 people, ~200 images/person/camera (min 5)
  • 900+ single-view people (for the gallery)

3DPeS http://www.openvisor.org/3dpes.asp http://imagelab.ing.unimore.it/3DPeS/snapshots.rar

  • 8 viewpoints
  • "Annotated" - already cropped 606 pedestrian images
  • 200 people, 606 images total (min 2 images/person); see the evaluation sketch below
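
Since 3DPeS (and LRS above) ship as cropped per-person images with a variable number of images per identity, a common protocol is to put one image per person in the gallery and use the remaining images as probes, scoring rank-k matches (a CMC curve). A minimal sketch of that evaluation, with random vectors standing in for a real appearance descriptor:

  # Sketch: single-shot gallery / multi-probe CMC evaluation for a cropped
  # re-identification set such as 3DPeS. Features here are placeholders.
  import numpy as np

  def cmc(gallery_feats, gallery_ids, probe_feats, probe_ids, max_rank=20):
      """Cumulative Matching Characteristic over Euclidean distances."""
      hits = np.zeros(max_rank)
      for feat, pid in zip(probe_feats, probe_ids):
          dists = np.linalg.norm(gallery_feats - feat, axis=1)
          order = np.argsort(dists)
          rank = np.where(gallery_ids[order] == pid)[0][0]  # first correct match
          if rank < max_rank:
              hits[rank:] += 1
      return hits / len(probe_ids)

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      n_people, dim = 200, 64
      gallery_ids = np.arange(n_people)
      gallery_feats = rng.normal(size=(n_people, dim))
      # pretend each person has one extra probe image from another view
      probe_ids = np.arange(n_people)
      probe_feats = gallery_feats + 0.1 * rng.normal(size=(n_people, dim))
      print(cmc(gallery_feats, gallery_ids, probe_feats, probe_ids)[:5])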

--

ETHZ http://www.vision.ee.ethz.ch/~aess/dataset/

  • ~1 viewpoint (1 moving camera)
  • Annotated (we already cropped out the pedestrians)
  • 146 people, ~8500 images

MuHAVi: Multicamera Human Action Video Data http://dipersec.king.ac.uk/MuHAVi-MAS/

  • Images - 8 viewpoints, 8 cameras, 17 actions/video-sequences
  • Annotated (silhouettes) - 2 cameras, 2 actors, 2 actions/video-sequences

ViHASi: Virtual Human Action Silhouette Data http://dipersec.king.ac.uk/VIHASI/ username: VIHASI password: virtual$world

  • 20 viewpoints, 9 virtual actors
  • NOT annotated (but extracting silhouettes is straightforward since it's virtual, e.g. with MotionBuilder; see the sketch below)
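
Because the ViHASi footage is rendered, silhouettes can also be recovered without manual labelling: difference each frame against an empty background render and threshold. A minimal OpenCV sketch, assuming a static rendered background and illustrative file names:

  # Sketch: silhouette masks from synthetic footage by background differencing.
  # ASSUMPTIONS: static rendered background, an "empty" reference frame on disk.
  import cv2
  import numpy as np

  def silhouette(frame_path, background_path, thresh=20):
      frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
      background = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
      diff = cv2.absdiff(frame, background)
      _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
      kernel = np.ones((5, 5), np.uint8)  # close small holes in the mask
      return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

  if __name__ == "__main__":
      cv2.imwrite("silhouette_0001.png", silhouette("frame_0001.png", "background.png"))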

BEHAVE Interactions Test Case Scenarios http://groups.inf.ed.ac.uk/vision/BEHAVEDATA/INTERACTIONS/

  • 2 viewpoints (90º difference), 2 cameras
  • Annotated - Bounding box and label (VIPER style; see the parsing sketch below)
  • ~7 people (min 3, max 8 across sequences)
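
The BEHAVE boxes above are in ViPER-style XML. A hedged parsing sketch that pulls bounding boxes out of such a file while ignoring XML namespaces; the tag and attribute names (bbox, x, y, width, height, framespan) are assumptions to verify against the actual annotation files:

  # Sketch: read bounding boxes from a ViPER-style XML annotation file.
  import xml.etree.ElementTree as ET

  def read_bboxes(xml_path):
      boxes = []
      for elem in ET.parse(xml_path).getroot().iter():
          if elem.tag.split("}")[-1].lower() != "bbox":  # namespace-agnostic match
              continue
          if elem.get("x") is None:  # skip elements lacking the assumed attributes
              continue
          boxes.append({
              "framespan": elem.get("framespan"),
              "x": int(elem.get("x")),
              "y": int(elem.get("y")),
              "w": int(elem.get("width")),
              "h": int(elem.get("height")),
          })
      return boxes

  if __name__ == "__main__":
      for box in read_bboxes("behave_annotation.xml")[:5]:
          print(box)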

PETS2009 Dataset S2: People Tracking, Sparse Crowd http://www.cvg.rdg.ac.uk/PETS2010/a.html#s2

  • 4 viewpoints, 8 cameras (see the frame-loading sketch below)
  • Sparse Crowd
  • Can't find annotation (but manual annotation was made for evaluation)
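
For working with the synchronized views, a small loading sketch: it reads the same frame index from several per-view folders with OpenCV. The folder and file naming (View_001/frame_0000.jpg) is an assumption about how the PETS2009 sequences unpack, so adjust it to the real layout.

  # Sketch: load one synchronized frame from each PETS2009-style view folder.
  import os
  import cv2

  def load_views(root, frame_idx, views=("View_001", "View_002", "View_003", "View_004")):
      frames = {}
      for view in views:
          path = os.path.join(root, view, "frame_%04d.jpg" % frame_idx)
          img = cv2.imread(path)
          if img is not None:  # cv2.imread returns None for missing files
              frames[view] = img
      return frames

  if __name__ == "__main__":
      frames = load_views("PETS2009/S2_L1", frame_idx=0)
      print("loaded views:", sorted(frames))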

ETISEO http://www-sop.inria.fr/orion/ETISEO/download.htm#video_data

  • unknown - asked permission for access; got permission, ask me if anyone wants it - Dario
  • supposedly fully annotated

UMPM Benchmark http://www.projects.science.uu.nl/umpm/ username: umpmuser password: ahJaka4o

  • unknown - asked permission for access
  • fully annotated - markers of body pose/position