3D ball tracker

This page contains information on how to build, set up and run '''pf3dTracker''', the particle-filter-based 3D ball tracker, and '''pf3dBottomup''', the 3D ball detector. Should you have any questions or complaints about the tracker, please send Matteo Taiana an email at: mtaiana at isr*ist*utl*pt. Should you have questions or complaints about the bottom up module, please send Martim Brandão an email at: mbrandao at isr*ist*utl*pt. The algorithm of the tracker is described in the paper '''"Tracking objects with generic calibrated sensors: an algorithm based on color and 3D shape features"'''; please cite it if you use the tracker in your research. If you use the bottom up module in your research, please let us know.

== System architecture and behaviour ==

The following image shows a simplified version of the way the modules are connected together.

The bottom up module detects balls in the input images and sends 3D hypotheses on the position of balls to the tracker.

The tracker module tracks one ball and outputs a 3D estimate of its position in the reference frame of the camera.

The frame transformer module (built together with the tracker) transforms the 3D coordinates into the root reference frame of the robot.

Using the three modules together enables fast detection and robust tracking of a ball. If needed, the tracker can also be run without the bottom up module.

[[Image:TrackerDetectorInteraction.jpg]]

For the tracker and bottom up modules to work well together, they should share the same colour and shape model for the ball, and the same camera model parameters. See [http://mediawiki.isr.ist.utl.pt/wiki/3D_ball_tracker#Configuring_the_modules_to_work_well_together below] for details on how to write the initialization files accordingly.
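
For reference, a minimal sketch of the YARP connections implied by the diagram above, assuming the default tracker port names from the initialization file shown below; the camera port (typically /icub/cam/left on the robot) and the bottom up module's port names are assumptions and may differ on your setup:

 #images from the camera (or the rectifier) to the tracker and to the detector
 yarp connect /icub/cam/left /pf3dTracker/video:i
 yarp connect /icub/cam/left /pf3dBottomup/video:i
 #3D hypotheses on the ball position, from the detector to the tracker
 yarp connect /pf3dBottomup/particles:o /pf3dTracker/particles:i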

== Get and build the source code ==

The source code of the tracker and the detector is part of the iCub repository: [http://eris.liralab.it/wiki/Getting_the_iCub_software this page] explains how to get it, and [http://eris.liralab.it/wiki/Manual#Six._Software.2C_Compiling_YARP_and_iCub this other one] explains how to build it. Both modules depend on YARP and OpenCV; the tracker also depends on iKin, while the bottom up module also depends on iCubVis. On this page you will encounter the variables $ICUB_ROOT and $ICUB_DIR: $ICUB_ROOT should point to the root of your copy of the iCub repository, while $ICUB_DIR should point to the directory where you build or install the binaries; have a look [http://eris.liralab.it/wiki/Environment here] for more information. The source code of the tracker is contained in the directory $ICUB_ROOT/main/src/modules/pf3dTracker, and the code of the detector in $ICUB_ROOT/main/src/modules/pf3dBottomup. After the building process, the binaries are stored in $ICUB_DIR/bin; you should be able to invoke them from any directory.
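
As a quick reference, a minimal sketch of an in-source CMake build of the iCub repository on Linux (the pages linked above describe the full procedure and the required CMake options):

 cd $ICUB_ROOT/main
 cmake .
 make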

== Configuration ==

=== Configure the tracker ===

An example configuration comes with the iCub software, so you can test the tracker even without creating the models described below. Beware, though, that the tracker will not work well without customized models.

For the tracker to work properly, you need to create a '''colour model''' for the specific ball you want to track. This is done by grabbing images with the camera you want to use, cutting out the parts of the images where the ball is seen and pasting them all together in one file. The background of this image should be white, as white pixels are discarded when building the model histogram. The robustness of the tracker will depend on this model: you should include images in which the ball is seen under different lighting conditions. The more images you cut out, the better. If you can change the colour/brightness parameters of the camera, please do so before creating the colour model and use the same settings every time you use the tracker. Good settings include high saturation and a brightness that never makes parts of the ball appear white or black.

Two examples of colour template images, for a yellow and a red ball, respectively:

[[Image:Tracker_color_template.jpg]]
[[Image:Tracker_color_template_2.jpg]]

Some images depicting the hand of the iCub robot were included in the template for the red ball, in the hope that this will improve tracking when the ball is partially occluded by the robot's hand.
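
One possible way to paste the cut-outs into a single template file with a white background is ImageMagick's montage tool (a sketch; the file names ball_*.png and colour_template.bmp are placeholders, and any image editor works just as well):

 montage ball_*.png -background white -geometry +2+2 colour_template.bmp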

You need to create a '''shape model''' for the specific ball you want to track. This is done using the Matlab script $ICUB_ROOT/main/src/modules/pf3dTracker/matlab_files/write_initial_ball_points.m. You should set three parameters inside the script: R, R1 and R2. R is the radius of the ball you want to track, in millimetres. R1 and R2 are the radii used to project the inner and outer contour (see [http://mediawiki.isr.ist.utl.pt/wiki/3D_ball_tracker#Theoretical_foundations_of_the_tracker the theoretical foundations section] for more details). If you want a precise estimate of the 3D position of the ball, you should set R1 and R2 close to the value of R (e.g. 10% difference). If you want the tracker to withstand high accelerations of the ball while keeping the number of particles low, you should increase the difference up to 30% (this is the value I typically use). The script will create a file called something like initial_ball_points_31mm_30percent.csv.
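
After editing R, R1 and R2 inside the script, you can run it from the command line, for example (a sketch, assuming Matlab is on your path; running the script from the Matlab GUI works just as well):

 cd $ICUB_ROOT/main/src/modules/pf3dTracker/matlab_files
 matlab -nodisplay -r "write_initial_ball_points; exit"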

You need to create a '''dynamic model''' for the ball. Basically you have to fill in the dynamic matrix. I use a constant velocity model with random acceleration. The data for this is stored in models/motion_model_matrix.csv. I'm not sure that the tracker will work properly with other configurations of the motion model. The parameter accelStDev, set in the initialization file (see below), is also quite important for the dynamic model.
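
For reference, a generic constant-velocity dynamic matrix for a state (x, y, z, vx, vy, vz) and time step Δt has the following form; this only illustrates the model, and the exact layout expected in motion_model_matrix.csv may differ:

 1 0 0 Δt 0  0
 0 1 0 0  Δt 0
 0 0 1 0  0  Δt
 0 0 0 1  0  0
 0 0 0 0  1  0
 0 0 0 0  0  1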

You need to calibrate the camera you use, i.e. estimate the intrinsic camera parameters. You can do that using camCalibConf, for example.

You need to customize the file that sets the tracker up on start up. The default initialization file is $ICUB_ROOT/main/app/pf3dTracker/conf/pf3dTracker.ini. Here is an example:

 ####################################
 #configuration file for pf3dTracker#
 ####################################
 
 
 #############
 #module name#
 #############
 name                        /pf3dTracker
 
 #############################
 #parameters of the algorithm#
 #############################
 nParticles                  900
 #nParticles                 number of particles used
 accelStDev                  30
 #accelStDev                 standard deviation of the acceleration noise
 insideOutsideDiffWeight     1.5
 #insideOutsideDiffWeight    inside-outside difference weight for the likelihood function
 colorTransfPolicy           1
 #colorTransfPolicy          [0=transform the whole image | 1=only transform the pixels you need]
 
 
 #########################
 #port names and function#
 #########################
 inputVideoPort              /pf3dTracker/video:i
 #inputVideoPort             receives images from the grabber or the rectifying program.
 outputVideoPort             /pf3dTracker/video:o
 #outputVideoPort            produces images in which the contour of the estimated ball is highlighted.
 outputDataPort              /pf3dTracker/data:o
 #outputDataPort             produces a stream of data in the format: X, Y, Z [meters], likelihood, U, V [pixels], seeing_object.
 inputParticlePort           /pf3dTracker/particles:i
 #inputParticlePort          receives hypotheses on the position of the ball from the bottom up module
 outputParticlePort          /pf3dTracker/particles:o
 #outputParticlePort         produces data for the plotter. it is usually not active for performance reasons.
 outputAttentionPort         /pf3dTracker/attention:o
 #outputAttentionPort        produces data for the attention system, in terms of a peak of saliency.
 
 
 #################################
 #projection model and parameters#
 #################################
 #projectionModel, only the perspective one was implemented so far.
 projectionModel             perspective
 
 #iCubLisboaLeftEye_Zoom_Lens_2009_05_19
 w 320
 h 240
 perspectiveFx 445.202
 perspectiveFy 445.664
 perspectiveCx 188.297
 perspectiveCy 138.496
 
 
 #######################
 #tracked object models#
 #######################
 #trackedObjectType, only sphere was implemented so far.
 trackedObjectType           sphere
 trackedObjectColorTemplate  models/red_smiley_2009_07_02.bmp
 trackedObjectShapeTemplate  models/initial_ball_points_smiley_31mm_20percent.csv
 
 motionModelMatrix           models/motion_model_matrix.csv
 trackedObjectTemp           current_histogram.csv
 
 
 #######################
 #initialization method#
 #######################
 #initialization method, only 3dEstimate was implemented so far.
 initializationMethod        3dEstimate
 #initial position [meters]
 initialX                       0
 initialY                       0
 initialZ                       0.5  
 
 
 ####################
 #visualization mode#
 ####################
 #circleVisualizationMode	[0=inner and outer circle | 1=one circle with the correct radius]
 #default 0. only applies to the sphere.
 circleVisualizationMode	1
 
 
 #################################
 #likelihood and reset condition #
 #################################
 #the tracker produces a value of likelihood at each time step.
 #this value can be used to infer if the object it is tracking is the correct one.
 #
 #if likelihood<=this value for 5 consecutive frames, the tracker
 #assumes it's not seeing the right object and is reinitialized.
 #
 likelihoodThreshold         0.005
 
 #########################
 #attention-related stuff#
 #########################
 attentionOutputMax          300
 attentionOutputDecrease     0.99
 
 
 ##########################
 #image saving preferences#
 ##########################
 #save images with OpenCV?
 saveImagesWithOpencv        false
 #always use the trailing slash here.
 saveImagesWithOpencvDir     ./graphical_results/
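
If you want to try the tracker on its own, without the bottom up module and the application manager, a possible sequence of commands is the following. This is only a sketch: it assumes that the binaries in $ICUB_DIR/bin are on your path, that the module accepts the initialization file through the standard YARP --from option, and that the images come from the left camera of the robot; adapt names and options to your setup.

 #start the tracker with your initialization file
 pf3dTracker --from pf3dTracker.ini
 
 #start a viewer for the output images
 yarpview --name /viewer
 
 #connect the ports
 yarp connect /icub/cam/left /pf3dTracker/video:i
 yarp connect /pf3dTracker/video:o /viewer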

=== Configure the bottom up module ===

The default initialization file for configuring the detector is $ICUB_ROOT/main/app/pf3dBottomup/conf/pf3dBottomup.ini. Here is an example:

 nParticles 50		#number of generated particles
 
 maskVmin 15		#minimum acceptable pixel value
 maskVmax 256		#maximum acceptable pixel value
 maskSmin 70           #minimum acceptable pixel saturation
 Blur 1                #gaussian blur variance
 
 #ball shape (size) model
 sphereRadius 0.031	#radius of the ball in meters
 
 #ball colour model file
 trackedObjectColorTemplate  models/red_smiley_2009_07_02.bmp
 
 #projection model parameters:
 w 320
 h 240
 perspectiveFx 445.202
 perspectiveFy 445.664
 perspectiveCx 188.297
 perspectiveCy 138.496

=== Configuring the modules to work well together ===

You should make sure that the two modules use the same colour model for the ball, i.e. the parameter "trackedObjectColorTemplate" should point to the same file in both initialization files. A simple way of doing so is to put a copy of the same file in both conf/models directories ($ICUB_ROOT/main/app/pf3dTracker/conf/models/ and $ICUB_ROOT/main/app/pf3dBottomup/conf/models/) and have both initialization files point at it:

Initialization file for the tracker (excerpt):

 trackedObjectColorTemplate  models/red_smiley_2009_07_02.bmp

Initialization file for the detector (excerpt):

 trackedObjectColorTemplate  models/red_smiley_2009_07_02.bmp

You should make sure that the projection model parameters are the same in both initialization files:

 w 320
 h 240
 perspectiveFx 445.202
 perspectiveFy 445.664
 perspectiveCx 188.297
 perspectiveCy 138.496

You should make sure that the size of the tracked ball is the same for both modules. In the initialization file for the tracker, you find the line:

 trackedObjectShapeTemplate  models/initial_ball_points_smiley_31mm_20percent.csv

which implies that the radius of the ball is 31mm. This should be matched by the following line in the initialization file for the detector:

 sphereRadius 0.031	#radius of the ball in meters
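
A quick way to verify that the two modules really use the same colour model is to compare the two copies of the template file, for example (a sketch, using the file names from the examples above):

 diff $ICUB_ROOT/main/app/pf3dTracker/conf/models/red_smiley_2009_07_02.bmp \
      $ICUB_ROOT/main/app/pf3dBottomup/conf/models/red_smiley_2009_07_02.bmp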

== Run tracker+bottom up ==

The tracker works best if the brightness/colour parameters of the camera are set to the same values they had when the images that form the colour model were acquired. To make sure of this, note down the camera parameters at the time you acquire those images and set them back to the same values before running the tracker (you can use frameGrabberGui for this).

Here is an example of such parameters:

 brightness 0
 sharpness  0.5
 white balance 0.648 0.474
 hue        0.482
 saturation 0.826
 gamma      0.400
 shutter    0.592
 gain       0.305

=== Run only tracker+bottom up ===

The XML template for running the detector and the tracker together is $ICUB_DIR/main/app/pf3dTracker/scripts/pf3dTrackerWithBottomup.xml.template. You need to change the "node" information in the XML file before you run it, to suit your computers' names.
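
For example, copy the template, edit the node names in your copy, and then launch it with the application manager as shown below (the name myVersionOfpf3dTrackerWithBottomup.xml is simply the one used in the command that follows):

 cp $ICUB_DIR/main/app/pf3dTracker/scripts/pf3dTrackerWithBottomup.xml.template \
    $ICUB_DIR/main/app/pf3dTracker/scripts/myVersionOfpf3dTrackerWithBottomup.xml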

  $ICUB_DIR/bin/manager.py $ICUB_DIR/main/app/pf3dTracker/scripts/myVersionOfpf3dTrackerWithBottomup.xml

This will open a window similar to this:

[[image:applicationManagerTrackerBottomup.jpg]]

Check the dependencies, run the modules and connect the ports.

=== Run tracker+bottom up in the IIT/ISR grasping demo ===

The XML template for running the grasping demo is $ICUB_DIR/main/app/demoGrasp_IIT_ISR/scripts/demoGraspWithBottomup.xml.template. You need to change the "node" information in the XML file before you run it, as in the previous example. As in that example, once the GUI opens, check the dependencies, run the modules and connect the ports.
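
For example, analogously to the previous section (a sketch; myVersionOfDemoGraspWithBottomup.xml is a hypothetical name for your edited copy of the template):

 $ICUB_DIR/bin/manager.py $ICUB_DIR/main/app/demoGrasp_IIT_ISR/scripts/myVersionOfDemoGraspWithBottomup.xml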

== Theoretical foundations of the tracker ==

If you want to know more about the theoretical ideas behind the tracker, please have a look at the papers on [http://welcome.isr.ist.utl.pt/people/index.asp?accao=showpeople&id_people=212 this page].

== Demo videos ==

If you want to watch videos and evaluate the performance of the tracker, please have a look at [http://users.isr.ist.utl.pt/~mtaiana/demos.html this page].

== ToDo ==

;[[Image:empty_bullet.png]] Translate from Matlab to C++ the piece of code that computes the initial ball points and read the value of the radius from the initialization file :
;[[Image:empty_bullet.png]] Read the color/illumination parameters from the camera before starting the tracker; set the desired parameters of the camera, then start the tracker; restore the original parameters when quitting :
;[[Image:empty_bullet.png]] Make the tracker compute the histogram with Gaussian kernels instead of Dirac's :
;[[Image:empty_bullet.png]] Make the tracker quit gracefully when asked to, instead of requiring multiple ctrl-c's :
;[[Image:empty_bullet.png]] Document the code with Doxygen :
;[[Image:empty_bullet.png]] Add the "Expected behaviour" section to the wiki, where the desired behaviour of the tracker is described :
;[[Image:full_bullet_2.png]] Make the tracker adaptive to different image sizes :
;[[Image:full_bullet_2.png]] Turn the number of particles into a parameter loaded at start time :
;[[Image:full_bullet_2.png]] Get rid of IPP dependency, using OpenCV :


[[Category:Vislab]]