Affordance imitation


Modules

This is the general architecture currently under development (updated 28/07/09).


Ports and communication

The interface between modules is under development. The current version (subject to changes as we refine it) is as follows:

  • Behavior to AttentionSelection -> vocabs "on" / "off"
  • Behavior to Query -> vocabs "on" / "off"

We should add some kind of context to the "on" command (imitation and learning being the most basic ones); a minimal sketch of these vocab commands follows.
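As a concrete illustration, here is a minimal sketch of the Behavior-side command, assuming the classic YARP C++ API (Bottle, Port, Vocab::encode; later YARP releases renamed some of these calls). The port names and the "imit" context tag are hypothetical placeholders, not a fixed interface:

  // Minimal sketch: Behavior switching AttentionSelection on/off via vocabs.
  // Port names and the "imit" context vocab are hypothetical placeholders.
  #include <yarp/os/Network.h>
  #include <yarp/os/Port.h>
  #include <yarp/os/Bottle.h>
  #include <yarp/os/Vocab.h>

  using namespace yarp::os;

  int main() {
      Network yarp;                                    // initialize YARP

      Port cmdPort;
      cmdPort.open("/behavior/attention:o");           // hypothetical name
      Network::connect("/behavior/attention:o", "/attentionSelection/cmd:i");

      Bottle on;
      on.addVocab(Vocab::encode("on"));
      on.addVocab(Vocab::encode("imit"));              // proposed context tag
      cmdPort.write(on);

      // ... later, switch attention selection off again
      Bottle off;
      off.addVocab(Vocab::encode("off"));
      cmdPort.write(off);

      cmdPort.close();
      return 0;
  }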

  • Gaze Control -> Behavior: read the current head state/position
  • Query to Behavior -> "end" / "q"
  • Query to Effect Detector. The main objective of this port is to start the tracker on the object of interest (a message-packing sketch follows this list). We need to send at least:
    • position (x, y) within the image. 2 doubles.
    • size (h, w). 2 doubles.
    • color histogram. TBD.
    • saturation parameters (max, min). 2 ints.
    • intensity (max, min). 2 ints.
  • Effect Detector to Query
    • Camshiftplus format
  • blobDescriptor -> query
    • Affordance descriptor. Same format as Camshiftplus.
    • Tracker init data: histogram (could be different from the affordance one) + saturation + intensity.
  • query -> object segmentation
    • vocab message: "do seg" (a trigger sketch follows this list)
  • object segmentation -> blob descriptor
    • labelled image
    • raw image
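To make the tracker-init item above concrete, here is a sketch of how Query might pack that message into a YARP Bottle, again assuming the classic YARP C++ API. Port names are hypothetical, and since the color-histogram format is still TBD, a nested list of bin values is assumed:

  // Sketch: Query -> Effect Detector tracker initialization message.
  // Port names are hypothetical; the histogram encoding (TBD above) is
  // assumed here to be a nested list of bin values.
  #include <yarp/os/Network.h>
  #include <yarp/os/BufferedPort.h>
  #include <yarp/os/Bottle.h>

  using namespace yarp::os;

  int main() {
      Network yarp;

      BufferedPort<Bottle> initPort;
      initPort.open("/query/effectDetector:o");        // hypothetical name
      Network::connect("/query/effectDetector:o", "/effectDetector/init:i");

      Bottle& msg = initPort.prepare();
      msg.clear();
      msg.addDouble(160.0);                            // position x (pixels)
      msg.addDouble(120.0);                            // position y (pixels)
      msg.addDouble(40.0);                             // size h
      msg.addDouble(30.0);                             // size w

      Bottle& hist = msg.addList();                    // color histogram (format TBD)
      for (int i = 0; i < 16; i++) hist.addDouble(1.0 / 16.0);  // placeholder bins

      msg.addInt(255);                                 // saturation max
      msg.addInt(40);                                  // saturation min
      msg.addInt(250);                                 // intensity max
      msg.addInt(30);                                  // intensity min

      initPort.write();
      return 0;
  }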
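Similarly, the "do seg" trigger can be sent as a one-way message; the segmentation result then flows to blobDescriptor on separate image ports rather than back to query. The port names and the string encoding of "do seg" are assumptions:

  // Sketch: query triggering object segmentation with "do seg".
  // One-way message; the labelled and raw images go to blobDescriptor on
  // separate ports. Port names and message encoding are assumptions.
  #include <yarp/os/Network.h>
  #include <yarp/os/Port.h>
  #include <yarp/os/Bottle.h>

  using namespace yarp::os;

  int main() {
      Network yarp;

      Port trigger;
      trigger.open("/query/objectSeg:o");              // hypothetical name
      Network::connect("/query/objectSeg:o", "/objectSeg/cmd:i");

      Bottle cmd;
      cmd.addString("do seg");                         // assumed encoding
      trigger.write(cmd);

      trigger.close();
      return 0;
  }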