DollarPedestrianDetectionCode

Latest revision as of 17:40, 8 May 2014

Piotr Dollár et al. collected the Caltech Pedestrian Detection Benchmark, a set of data and code to evaluate pedestrian detection algorithms. The database consists of sequences of annotated images acquired by a camera mounted on a car. The authors provide the data, code that allows others to test their detectors on that data, and the results of many state-of-the-art detectors, both in synthetic form (ROC curves) and in extensive form (the actual detection bounding boxes for each image). The authors also provide other datasets and results converted to match their format.

To get the system running you should download Piotr's Matlab Toolbox, the Evaluation/Labelling code and the data (image sequences, ground-truth bounding boxes and detection-result bounding boxes). Links to all of these are on the main page of the Caltech Pedestrian Detection Benchmark.
For the code to work you need Matlab with the Image Processing Toolbox installed.

Getting ready

You should unpack Piotr's Matlab Toolbox somewhere and add it to the Matlab path:

 addpath(genpath('/home/matteo/PMT/')); savepath;

You should unpack the Evaluation/Labelling code in a directory that we will call $code (later, you should run Matlab from that directory).

All the files (image databases, annotations, detection results) should be put in standard subdirectories of $code/. You need one subdirectory for each database; for instance, for INRIA you need $code/data-INRIA/. The names are standard: they are defined in dbInfo.m.
Inside a particular database directory, you need to have:

  1. the video subdirectory, $code/data-INRIA/videos, containing more subdirectories and the ".seq" files with the images
  2. the annotations subdirectory, $code/data-INRIA/annotations, containing more subdirectories and the ground truth annotations
  3. the res subdirectory, $code/data-INRIA/res, containing more subdirectories with the results of the detections obtained running various algorithms
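
Putting the three items above together, the expected layout looks roughly like this (the set/video/algorithm names are examples taken from commands elsewhere on this page, not a requirement):

```
$code/
└── data-INRIA/
    ├── videos/set01/V000.seq                  (image sequences)
    ├── annotations/set01/V000.vbb             (ground-truth bounding boxes)
    └── res/
        ├── HOG/...                            (results: one file per directory, or...)
        └── myDetector/set01/V000/I00000.txt   (...one file per image)
```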

The detections are stored in text files. There can be either one single file for a whole directory or one file per image. In the first case the format of each line is: (image#, x0, y0, deltaX, deltaY, confidence); in the second case the first field is missing. The confidence value is used to plot the ROC curves. It is not clear whether the top-left corner of the image is (0,0) or (1,1).
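
As a sketch of the per-image variant, a results file could be written like this. The detection matrix dets and the comma delimiter are assumptions here (check an existing results file for the exact delimiter the evaluation code expects):

```matlab
% Hypothetical detections for one image: one row per box,
% in the order [x0 y0 deltaX deltaY confidence] described above.
dets = [ 12 34 64 128 0.92 ;
         70 20 60 120 0.41 ];

% Per-image variant: no image# field, one file per image.
f = fopen('I00000.txt', 'w');
fprintf(f, '%d,%d,%d,%d,%f\n', dets');  % transpose: fprintf consumes column-wise
fclose(f);
```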

Operations you want to do

Set the db you're working on

 [pth,setIds,vidIds,skip,minHt] = dbInfo( 'inriatest' )

or simply:

 dbInfo( 'inriatest' )

Show db images with annotations

 vbbPlayer

Compute the ROC curves for the specified algorithms, on the current db

 dbEval

Display images with GT annotations, the detections of a specific algorithm and their evaluation (true positive, false positive, etc.)

 dbBrowser

You should set some parameters in dbBrowser.m, such as which algorithm's results to plot and whether to resize the detected bounding boxes. I am not sure when resizing is needed: it may depend on the algorithm being tested or on the database being used.

 rPth=[pth '/res/HOG'];        % directory containing results
 thr=[];                       % detection threshold
 resize={100/128, 42/64, 0};   % controls resizing of detected bbs

This file writes plots in $code/results and data in $code/eval.

How to evaluate the results of your algorithm

You should repeat, for all the images in a given set:

  1. read one image
  2. run your classifier on it
  3. write the annotations in your results directory, e.g. $code/data-INRIA/res/myDetector/set01/V000/I00000.txt

Here's an example:

 close all, clear;
 
 %Add the directories where Maji code is stored to the path
 addpath MajiCode/ped-demo-fast-8x8x1.1/
 addpath MajiCode/libsvm-mat-2.84-1-fast.v3/
 
 %Set parameters for Maji's code
     % non-max suppression (similar to hog-detector)
     nmax_param.sw = 0.1;
     nmax_param.sh = 0.1;
     nmax_param.ss = 1.3;
     nmax_param.th = 0.0;
 
     % detector is run at this scaleratio with a stride of 8x8
     scaleratio = 2^(1/8);
 
     % load precomputed models
     load approx_models;
     approx_model_hard = approx_models{2}; 
     %two slightly different models (#training) 
 
 
 %Open the image sequence file (INRIA test set)
 seqReader = seqIo( 'data-INRIA/videos/set01/V000.seq', 'reader'); %The pointer initially points to image -1; after the first seqReader.next() it points to the first image
 outputDirectory = 'data-INRIA/res/MatteoMaji/set01/V000/'; %Set where to write the results
 info = seqReader.getinfo();
 nImages = info.numFrames;
 
 %Run the detector on each image, store bounding boxes results (and images?)
 for (imageNumber = 1 : nImages)
   seqReader.next();                          %Move pointer to the next image
   [image, timeStamp] = seqReader.getframe(); %Get current image
   [dr,ds] = run_detectorForDollar(image,approx_model_hard,scaleratio,nmax_param, outputDirectory, imageNumber-1); %Detect pedestrians (we pass imageNumber-1 because file numbers start from 0!)
 end

Once you have your detections, you can evaluate them with dbEval.m. I had to change some parameters, like this:

 dataNames={'InriaTest'};  %Evaluate algorithms on INRIA test
 
 [...]
 
 case 'InriaTest'
     aIds=[7 15]; bnds=[]; %Evaluate only HIKSVM and my algorithm
 
 [...]
 
 case 15,    alg=def('MatteoMaji',1,id); %Define the name and directory for my algorithm
 %I'm not quite sure what the 1 means

After running dbEval.m, you can run dbBrowser.m and compare your detection bounding boxes with the ground truth ones, inspect whether or not the evaluation software considers a detection as successful, etc.

You can change some parameters in dbEval.m so that it creates albums showing the mistakes a specific algorithm makes. (Be careful to select only one algorithm to evaluate, otherwise you will create at least tens of albums!)

 plotBb( res, rDir, 30, 'fp' ); %The third parameter tells the system how many pages of false positives to save at most
 plotBb( res, rDir, 0, 'tp' );  %If you want to save true positives
 plotBb( res, rDir, 30, 'fn' ); %The third parameter tells the system how many pages of false negatives to save at most

Various

Read one or more images from a ".seq" file and write them to regular image files

 Is = seqIo( 'data-INRIA/videos/set00/V000.seq', 'toImgs', '.', 1, 0, 2, 'png' )
 %Arguments: input seq file, command, destination directory, frames to skip, first frame, last frame, format

Create seq file from a directory of images

seq utility: help seqIo>frImgs

  info.codec='jpg'; 
  info.fps=5;
  info.quality = 100;
  info.width =640;
  info.height =480;
  seqIo( 'cam51.seq', 'frImgs', info, 'sDir', '/home/athira/Desktop/HDA/data_labelling/toolbox/cam51', 'f0', 0, 'f1', 3660 )
  %Arguments: fname, command, info, varargin (name/value pairs)

Note: the code expects the extension of the images to be '.jpg', not '.jpeg'. If you want to use '.jpeg', alter line 335 of seqIo.m (frmStr).

Adding a new dataset (WIP)

In order to add a new dataset to the system, you should first of all create one or more seq files. Create the root directory where this dataset will reside, e.g. data-HDA, plus the three subdirectories data-HDA/videos, data-HDA/annotations and data-HDA/res.

Insert the entry relative to this dataset into the dbInfo.m script:

 case 'hda'
   setIds=0;      %ID's of sub datasets if there are some
   subdir='HDA';
   skip=1;        %number of frames used for subsampling during evaluation
   minHt=50;      %minimum labeled pedestrian height in dataset, not sure if this is used in the code somewhere or if it's just descriptive
   vidIds={0};    %video ID's inside each of the sub datasets

To create a seq file starting from a video, you can use the createSeqFile.m script. For some reason, the seq files created this way are not playable directly with vbbPlayer.m, but they can be labelled using vbbLabeler.m. You can download a short tutorial on how to use the labeller here: labelling demo.

The classes for the bounding boxes are: person for a single person; person? for something the labeller is not sure is a person; people for groups of people for which it is impossible to label each person individually. There is also a label for persons who are partly occluded, but it is not clear how it works.

Accessing the frames of a seq file

 %Initialize the reader
 seqReader = seqIo( 'cam60.seq', 'reader');
 %Read one frame
 seqReader.seek(269);
 image = seqReader.getframe();
 imshow(image);

Create new annotation (INRIA data set)

It seems impossible to do this with vbbLabeler.m: it complains, apparently because the images are not all of the same size. So we can do it using bbLabeler.m instead, which saves the annotations in a different format (one file per image).

 bbLabeler( [{'person', 'people'}], '~/PhD/Datasets/INRIAPerson/Train/pos/', 'inriaNewLabels');

The single annotation files are combined into one single vbb file by running:

 [B] = vbb( 'vbbFrFiles', 'NewAnnotations' )
 vbb('vbbSave', B, 'V000.vbb')

Transform all the images of a dataset to create a new version of it

Use the "convert" function on the seq file, applying imgFun(I) to each frame I:

   seqIo( fName, 'convert', tName, imgFun, varargin )
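
For example, using the 'convert' signature above, a grayscale copy of a sequence could be produced like this (the output file name is made up for illustration; rgb2gray is the standard Image Processing Toolbox function):

```matlab
% imgFun can be any function mapping one frame to one frame.
imgFun = @(I) rgb2gray(I);

% Write a new seq file with every frame converted to grayscale.
seqIo( 'data-INRIA/videos/set00/V000.seq', 'convert', ...
       'data-INRIA/videos/set00/V000-gray.seq', imgFun );
```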