DollarPedestrianDetectionCode

Piotr Dollár et al. collected the Caltech Pedestrian Detection Benchmark, a set of data and code to evaluate pedestrian detection algorithms. The database consists of sequences of annotated images acquired by a camera mounted on a car. The authors provide the data, code that allows others to test their detectors on that data, and the results of many state-of-the-art detectors, both in summary form (ROC curves) and in full form (the actual detection bounding boxes for each image). The authors also provide other data sets and results converted to match their format.

To get the system running you should download Piotr's Matlab Toolbox, the Evaluation/Labelling code and the data (image sequences, ground truth bounding boxes and detection result bounding boxes). Links to these are on the main page of the Caltech Pedestrian Detection Benchmark.
For the code to work you need to have Matlab with the Image Processing Toolbox installed.

Getting ready

You should unpack Piotr's Matlab Toolbox somewhere and add it to the Matlab path:

 addpath(genpath('/home/matteo/PMT/')); savepath;

You should unpack the Evaluation/Labelling code in a directory that we will call $code (later, you should run Matlab from that directory).

All the files (image databases, annotations, detection results) should be put in standard subdirectories of $code/. You need one subdirectory for each database; for instance, for INRIA you need $code/data-INRIA/. The directory names are standard and are defined in one of the Matlab files.
Inside each database directory, you need to have (a quick sanity check is sketched after this list):

  1. the videos subdirectory, $code/data-INRIA/videos, containing more subdirectories and the ".seq" files with the images
  2. the annotations subdirectory, $code/data-INRIA/annotations, containing more subdirectories and the ground truth annotations
  3. the res subdirectory, $code/data-INRIA/res, containing more subdirectories with the detection results obtained by running various algorithms
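
As a quick sanity check (just a sketch, using the dbInfo call described below under "Set the db you're working on"; the layout and the 'inriatest' name are taken from the rest of this page), you can verify that the path returned by dbInfo points at a directory with these subdirectories:

 %Sketch: check that the selected database has the expected layout
 [pth,setIds,vidIds,skip] = dbInfo( 'inriatest' );       %pth should be something like $code/data-INRIA
 assert( exist(fullfile(pth,'videos'),'dir') == 7 );
 assert( exist(fullfile(pth,'annotations'),'dir') == 7 );
 assert( exist(fullfile(pth,'res'),'dir') == 7 );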

Annotations are in the format: (x0, y0, deltaX, deltaY, confidence). The confidence value is used to plot the ROC curves. There is one annotation file for each image. I don't know if the top-left corner is (0,0) or (1,1).
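
For example, a single results file could be written like this (a hypothetical sketch: the detector name myDetector, the box values and the space delimiter are assumptions, so check what the evaluation code actually expects):

 %Hypothetical sketch: write two detections for image I00000 of set01/V000,
 %one bounding box per line in the (x0, y0, deltaX, deltaY, confidence) format
 bbs = [  16  48  40  96  0.87 ;
         210  52  38  92  0.35 ];
 dlmwrite( 'data-INRIA/res/myDetector/set01/V000/I00000.txt', bbs, ' ' );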

Operations you want to do

Set the db you're working on

[pth,setIds,vidIds,skip,minHt] = dbInfo( 'inriatest' )

Show db images with annotations

vbbPlayer

Compute the ROC curves for the specified algorithms, on the current db

dbEval

Display images with GT annotations, the detections of a specific algorithm and their evaluation (true positive, false positive, etc.)

dbBrowser

You should set some parameters in dbBrowser.m, such as which algorithm's results you want to plot and whether you want to resize the detected bounding boxes. I am not sure when the bounding boxes need to be resized, i.e. whether it depends on the algorithm you are testing or on the database you are using.

 rPth=[pth '/res/HOG'];        % directory containing results
 thr=[];                       % detection threshold
 resize={100/128, 42/64, 0};   % controls resizing of detected bbs

This file writes plots in $code/results and data in $code/eval.

How to evaluate the results of your algorithm

You should repeat, for all the images in a given set:

  1. read one image
  2. run your classifier on it
  3. write the annotations in your results directory, e.g. $code/data-INRIA/res/myDetector/set01/V000/I00000.txt

Here's an example:

 close all, clear;
 
 %Add the directories where Maji's code is stored to the path
 addpath MajiCode/ped-demo-fast-8x8x1.1/
 addpath MajiCode/libsvm-mat-2.84-1-fast.v3/
 
 %Set parameters for Maji's code
     % non-max suppression (similar to hog-detector)
     nmax_param.sw = 0.1;
     nmax_param.sh = 0.1;
     nmax_param.ss = 1.3;
     nmax_param.th = 0.0;
 
     % detector is run at this scaleratio with a stride of 8x8
     scaleratio = 2^(1/8);
 
     % load precomputed models
     load approx_models;
     approx_model_hard = approx_models{2}; 
     %two slightly different models (#training) 
 
 
 %Open the image sequence file (INRIA test set)
 seqReader = seqIo( 'data-INRIA/videos/set01/V000.seq', 'reader'); %The pointer initially points to image -1; after the first seqReader.next() it points to image 0
 outputDirectory = 'data-INRIA/res/MatteoMaji/set01/V000/'; %Set where to write the results
 info = seqReader.getinfo();
 nImages = info.numFrames;
 
 %Run the detector on each image and store the bounding box results (and images?)
 for imageNumber = 1 : nImages
   seqReader.next();                          %Move the pointer to the next image
   [image, timeStamp] = seqReader.getframe(); %Get the current image
   [dr,ds] = run_detectorForDollar(image,approx_model_hard,scaleratio,nmax_param, outputDirectory, imageNumber-1); %Detect pedestrians (pass imageNumber-1 because file numbers start from 0!)
 end

Once you have your detections, you can evaluate them with dbEval.m. I had to change some parameters like this:

 dataNames={'InriaTest'};  %Evaluate algorithms on INRIA test
 
 [...]
 
 case 'InriaTest'
     aIds=[7 15]; bnds=[]; %Evaluate only HIKSVM and my algorithm
 
 [...]
 
 case 15,    alg=def('MatteoMaji',1,id); %Define the name and directory for my algorithm
 %I'm not quite sure what the 1 means

After running dbEval.m, you can run dbBrowser.m and compare your detection bounding boxes with the ground truth ones, inspect whether or not the evaluation software considers a detection as successful, etc.

You can change some parameters in dbEval.m so that it creates albums showing the mistakes a specific algorithm makes. (Be careful to select only one algorithm to evaluate, otherwise you will create at least tens of albums!)

 plotBb( res, rDir, 30, 'fp' ); %The third parameter is the maximum number of pages of false positives to save
 plotBb( res, rDir, 0, 'tp' );  %If you want to save true positives
 plotBb( res, rDir, 30, 'fn' ); %The third parameter is the maximum number of pages of false negatives to save

Read one or more images from a ".seq" file and write them to regular image files

 %           input seq file                      command   dest dir  skip  first  last  format
 Is = seqIo( 'data-INRIA/videos/set00/V000.seq', 'toImgs', '.',      1,    0,     2,    'png' );
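
Alternatively, you can grab a single frame with the reader interface used earlier and save it yourself (a minimal sketch; the output file name is just an example, and the close() call assumes the reader object provides one):

 %Sketch: read the first frame with the reader interface and write it with imwrite
 sr = seqIo( 'data-INRIA/videos/set00/V000.seq', 'reader' );
 sr.next();                      %Advance from the initial position to the first image
 [I, timeStamp] = sr.getframe(); %Current image and its timestamp
 imwrite( I, 'I00000.png' );     %Write it as a regular png file
 sr.close();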