Affordance imitation

From ISRWiki
Revision as of 13:33, 29 July 2009

Modules

This is the general architecture currently under development.

(updated 28/07/09)


Ports and communication

The interface between modules is under development. The current version (subject to changes as we refine it) is as follows:

  • Behavior to AttentionSelection -> vocabs "on" / "off"
  • Behavior to Query -> vocabs "on" / "off"

We should add some kind of context to the "on" command (imitation and learning being the most basic ones).
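In YARP, short command words like "on" and "off" are typically sent as vocabs: 32-bit integers that pack up to four characters, first character in the least significant byte. The helper below is a minimal stand-alone sketch of that encoding (the real module would use YARP's own vocab facilities rather than these illustrative names):

```cpp
#include <cstdint>
#include <string>

// YARP-style vocab: up to four characters packed into a 32-bit integer,
// first character in the least significant byte.
constexpr std::int32_t vocab(char a, char b = 0, char c = 0, char d = 0) {
    return (static_cast<std::int32_t>(d) << 24) |
           (static_cast<std::int32_t>(c) << 16) |
           (static_cast<std::int32_t>(b) << 8)  |
            static_cast<std::int32_t>(a);
}

// Decode a vocab back to its text form, stopping at the first zero byte.
std::string vocab_to_string(std::int32_t v) {
    std::string s;
    for (int i = 0; i < 4; ++i) {
        char c = static_cast<char>((v >> (8 * i)) & 0xff);
        if (c == 0) break;
        s.push_back(c);
    }
    return s;
}

constexpr std::int32_t VOCAB_ON  = vocab('o', 'n');
constexpr std::int32_t VOCAB_OFF = vocab('o', 'f', 'f');
```

Because a vocab is just an int, it travels cheaply over a port while still reading as a word on both ends.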

  • Query to Behavior -> "end" / "q"
  • Query to Effect Detector. The main objective of this port is to start the tracker at the object of interest. We need to send at least:
    • position (x,y) within the image. 2 doubles.
    • size (h,w). 2 doubles.
    • color histogram. TBD.
    • saturation parameters (max, min). 2 ints.
    • intensity (max, min). 2 ints.
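The fields above can be modeled as one plain data record. The sketch below is illustrative only (the struct and function names are assumptions, and the real module would append these values to a YARP Bottle rather than a std::vector); it fixes the on-the-wire ordering to the order the list gives:

```cpp
#include <vector>

// Hypothetical plain-data model of the tracker-initialisation message.
struct TrackerInit {
    double x, y;                    // position (x,y) within the image
    double h, w;                    // size (h,w)
    std::vector<double> histogram;  // color histogram, format TBD
    int sat_max, sat_min;           // saturation limits
    int int_max, int_min;           // intensity limits
};

// Flatten in the order listed above, e.g. before writing to a port.
std::vector<double> serialize(const TrackerInit& m) {
    std::vector<double> out{m.x, m.y, m.h, m.w};
    out.insert(out.end(), m.histogram.begin(), m.histogram.end());
    out.push_back(m.sat_max);
    out.push_back(m.sat_min);
    out.push_back(m.int_max);
    out.push_back(m.int_min);
    return out;
}
```

Keeping the histogram in the middle means the receiver must know its length (or read it as a nested list) before it can locate the saturation and intensity limits; that is one reason its format is still marked TBD.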


  • blobDescriptor -> query
    • Affordance descriptor. Same format as camshiftplus
    • tracker init data. histogram (could be different from affordance) + saturation + intensity
  • query -> object segmentation
    • vocab message: "do seg"
  • object segmentation -> blob descriptor
    • labelled image
    • raw image
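Each of the interfaces above maps onto a named YARP port that the module opens at startup, accumulating a single success flag. The following is a minimal mock of that pattern using /demoAffv2/* port names from the demo code; the MockPort class is a stand-in for illustration only (real code would use yarp::os::Port or BufferedPort, which register names with the YARP name server):

```cpp
#include <set>
#include <string>

// Stand-in for a YARP port: open() just records the name locally and
// fails on an empty or already-taken name.
class MockPort {
public:
    bool open(const std::string& name) {
        if (name.empty() || !registry().insert(name).second) return false;
        name_ = name;
        return true;
    }
    const std::string& getName() const { return name_; }
private:
    static std::set<std::string>& registry() {
        static std::set<std::string> r;  // names opened so far
        return r;
    }
    std::string name_;
};

// Open a few of the module's ports, &=-accumulating success so that a
// single failed open makes the whole startup fail.
bool openAllPorts(MockPort& behavior, MockPort& eff, MockPort& descriptor) {
    bool ok = true;
    ok &= behavior.open("/demoAffv2/behavior:o");  // Behavior commands out
    ok &= eff.open("/demoAffv2/effect");           // query to Effect Detector
    ok &= descriptor.open("/demoAffv2/objsdesc");  // blobDescriptor data
    return ok;
}
```

Checking the combined flag once, instead of after every open, keeps the startup code short while still refusing to run with a partially wired module.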