Multiple View Tracking

'''CAVIAR''' http://homepages.inf.ed.ac.uk/rbf/CAVIAR/
* 2 viewpoints (90° difference), 2 cameras
* Annotated - Bounding box and label (XML parsing sketch below)
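The CAVIAR ground truth is XML-based; the following is a minimal parsing sketch using Python's standard xml.etree.ElementTree. The element and attribute names (frame, object, box, xc/yc/w/h) are assumptions for illustration only and should be checked against the actual ground-truth files.
<syntaxhighlight lang="python">
# Sketch: read frame-by-frame bounding boxes from an XML ground-truth file.
# Assumption: elements named frame/object/box with attributes number, id,
# xc, yc, w, h -- verify against the real CAVIAR annotation schema.
import xml.etree.ElementTree as ET
from collections import defaultdict

def load_boxes(xml_path):
    boxes = defaultdict(list)            # frame number -> list of (object id, box)
    root = ET.parse(xml_path).getroot()
    for frame in root.iter("frame"):
        fnum = int(frame.get("number"))
        for obj in frame.iter("object"):
            box = obj.find(".//box")
            if box is None:
                continue
            xc, yc = float(box.get("xc")), float(box.get("yc"))
            w, h = float(box.get("w")), float(box.get("h"))
            # assuming xc/yc are the box centre, store as (x_min, y_min, width, height)
            boxes[fnum].append((obj.get("id"), (xc - w / 2, yc - h / 2, w, h)))
    return boxes
</syntaxhighlight>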
'''VIPeR''' http://vision.soe.ucsc.edu/node/178
* 2 viewpoints
* "Annotated" - already cropped pedestrian images
* 632 people, 2 images each (pairing sketch below)
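With exactly two images per person, probe/gallery pairs follow directly from matching file names across the two cameras. The sketch below assumes the archive unpacks into cam_a/ and cam_b/ folders where the same person shares a file-name prefix; the folder names, prefix convention, and .bmp extension are assumptions to verify against the actual download.
<syntaxhighlight lang="python">
# Sketch: pair the two views of each person into (probe, gallery) image paths.
# Assumption: cam_a/ and cam_b/ directories, same person = same file-name prefix,
# .bmp files -- adjust to whatever the real VIPeR archive contains.
from pathlib import Path

def build_pairs(root):
    root = Path(root)
    cam_a = {p.stem.split("_")[0]: p for p in (root / "cam_a").glob("*.bmp")}
    cam_b = {p.stem.split("_")[0]: p for p in (root / "cam_b").glob("*.bmp")}
    ids = sorted(set(cam_a) & set(cam_b))
    return [(cam_a[i], cam_b[i]) for i in ids]   # one (probe, gallery) pair per person

pairs = build_pairs("VIPeR")
print(len(pairs), "identities with images in both cameras")
</syntaxhighlight>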
'''LRS''' http://lrs.icg.tugraz.at/download.php
* 2 viewpoints
* "Annotated" - already cropped 128x64 images
* 200 people, ~200 images/person/camera (min 5)
* 900+ single-view people (for gallery; CMC evaluation sketch below)
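Re-identification datasets of this kind (probes matched against a larger gallery) are usually scored with a cumulative match characteristic (CMC) curve. The sketch below is generic: it assumes a precomputed probe-by-gallery distance matrix and the index of the true gallery match for each probe, and is not tied to any official LRS protocol.
<syntaxhighlight lang="python">
# Sketch: cumulative match characteristic (CMC) curve from a distance matrix.
# dist[i, j] = distance between probe i and gallery item j;
# true_idx[i] = column of the correct gallery match for probe i.
import numpy as np

def cmc(dist, true_idx):
    true_idx = np.asarray(true_idx)
    order = np.argsort(dist, axis=1)                       # gallery sorted per probe
    ranks = np.argmax(order == true_idx[:, None], axis=1)  # 0-based rank of the true match
    counts = np.bincount(ranks, minlength=dist.shape[1])
    return np.cumsum(counts) / dist.shape[0]               # CMC(k) for k = 1..N

# Toy usage with random distances:
rng = np.random.default_rng(0)
d = rng.random((5, 20))
curve = cmc(d, np.arange(5))
print("rank-1 / rank-5 rates:", curve[0], curve[4])
</syntaxhighlight>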
'''3DPeS''' http://www.openvisor.org/3dpes.asp http://imagelab.ing.unimore.it/3DPeS/snapshots.rar
* 8 viewpoints
* "Annotated" - 606 already cropped pedestrian images
* 200 people, 606 images total (min 2 images/person)
'''MuHAVi''': Multicamera Human Action Video Data http://dipersec.king.ac.uk/MuHAVi-MAS/
* Images - 8 viewpoints, 8 cameras, 17 actions/video-sequences
* Annotated (silhouettes) - 2 cameras, 2 actors, 2 actions/video-sequences (mask-to-box sketch below)
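Silhouette masks can be turned into per-frame bounding boxes for tracking experiments. The sketch below assumes the masks load as images whose foreground pixels are non-zero; the file layout and naming are not taken from the MuHAVi documentation.
<syntaxhighlight lang="python">
# Sketch: derive a tight bounding box from a binary silhouette mask.
# Assumption: the mask is an image file with non-zero foreground pixels.
import numpy as np
from PIL import Image

def mask_to_bbox(mask_path):
    mask = np.array(Image.open(mask_path).convert("L")) > 0
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                      # empty silhouette, no person in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())  # x0, y0, x1, y1
</syntaxhighlight>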
'''ViHASi''': Virtual Human Action Silhouette Data http://dipersec.king.ac.uk/VIHASI/ username: VIHASI password: virtual$world
* 20 viewpoints, 9 virtual actors
* NOT annotated (but straightforward to annotate with MotionBuilder, since it's virtual)
'''BEHAVE''' Interactions Test Case Scenarios http://groups.inf.ed.ac.uk/vision/BEHAVEDATA/INTERACTIONS/
* 2 viewpoints (90° difference), 2 cameras
* Annotated - Bounding box and label (ViPER style)
* ~7 people (min 3, max 8 across sequences)
'''PETS2009''' Dataset S2: People Tracking, Sparse Crowd http://www.cvg.rdg.ac.uk/PETS2010/a.html#s2
* 4 viewpoints, 8 cameras
* Sparse crowd
* Annotation not found for download (manual annotation was made for the evaluation); IoU matching sketch below
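For tracking evaluation on sequences like these, tracker output is typically matched to ground-truth boxes frame by frame using intersection-over-union (IoU), the core step of CLEAR-MOT style metrics. The sketch below is a generic illustration, not the official PETS scoring code.
<syntaxhighlight lang="python">
# Sketch: greedy IoU matching of tracker boxes to ground-truth boxes in one frame.
# Boxes are (x0, y0, x1, y1); unmatched ground truth = misses,
# unmatched detections = false positives.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_frame(gt_boxes, det_boxes, thr=0.5):
    # Sort all candidate (gt, det) pairs by IoU and greedily accept the best ones.
    cand = sorted(((iou(g, d), gi, di) for gi, g in enumerate(gt_boxes)
                   for di, d in enumerate(det_boxes)), reverse=True)
    used_g, used_d, matches = set(), set(), []
    for score, gi, di in cand:
        if score < thr:
            break
        if gi not in used_g and di not in used_d:
            matches.append((gi, di, score))
            used_g.add(gi)
            used_d.add(di)
    return matches
</syntaxhighlight>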
'''ETISEO''' http://www-sop.inria.fr/orion/ETISEO/download.htm#video_data
* Unknown - access permission requested
* Supposedly fully annotated
'''UMPM Benchmark''' http://www.projects.science.uu.nl/umpm/ username: umpmuser password: ahJaka4o
* Unknown - access permission requested
* Fully annotated - marker-based body pose/position