Vislab
Institutional information: http://vislab.isr.ist.utl.pt
Our YouTube channel, with videos and demonstrations: http://www.youtube.com/user/VislabLisboa
Our internal video page on this wiki: VisLab Videos
Research Topics
- Machine Learning
- Computer Vision
Projects
Current projects
- POETICON++ - Robots Need Language: A computational mechanism for generalisation and generation of new behaviours in robots (EC FP7, Jan. 2012 - Dec. 2015)
- http://www.poeticon.eu/
- The main objective of POETICON++ is the development of a computational mechanism for generalisation of motor programs and visual experiences for robots. To this end, it will integrate natural language and visual action/object recognition tools with motor skills and learning abilities on the iCub humanoid. Tools and skills will engage in a cognitive dialogue for novel action generalisation and creativity experiments in two scenarios of "everyday activities", comprising (a) behaviour generation through verbal instruction and (b) visual scene understanding. POETICON++ views natural language as a necessary tool for endowing artificial agents with generalisation and creativity in real-world environments.
- Dico(re)²s - Discount Coupon Recommendation and Redemption System (EC FP7, July 2011 - June 2013)
- http://www.dicore2s.com/
- Dico(re)²s develops and deploys a coupon-based discount campaign platform to provide consumers and retailers/manufacturers with a personalized environment for maximum customer satisfaction and business profitability.
- First-MM - Flexible Skill Acquisition and Intuitive Robot Tasking for Mobile Manipulation (EC FP7, Feb. 2010 - Jul. 2013)
- http://www.first-mm.eu/
- The goal of First-MM is to build the basis for a new generation of autonomous mobile manipulation robots that can flexibly be instructed to perform complex manipulation and transportation tasks. The project will develop a novel robot programming environment that allows even non-expert users to specify complex manipulation tasks in real-world environments. In addition to a task specification language, the environment includes concepts for probabilistic inference and for learning manipulation skills from demonstration and from experience.
Past projects
- RoboSoM - A Robotic Sense of Movement (EC FP7, Dec. 2009 - Dec. 2012)
- http://www.robosom.eu/
- This project aims to advance the state of the art in motion perception and control in a humanoid robot. The fundamental principles to be explored are rooted in theories of human perception: Expected Perception (EP) and the Vestibular Unified Reference Frame.
- HANDLE - Developmental Pathway Towards Autonomy and Dexterity in Robot In-Hand Manipulation (EC FP7, Feb. 2009 - Feb. 2013)
- http://www.handle-project.eu
- This project aims to provide advanced perception and control capabilities to the Shadow Robot hand, one of the most mechanically advanced robotic hands. We follow paradigms of human learning, namely learning by imitation and by self-exploration, to enable the system to grasp and manipulate objects with different characteristics. Object characteristics and usages (object affordances) determine how the hand performs grasping and manipulation actions.
- URUS - Ubiquitous Networking Robotics in Urban Settings (EC FP6, Dec. 2006 - Nov. 2009)
- http://urus.upc.es
- RobotCub - Robotic Open-Architecture Technology for Cognition, Understanding and Behaviour (EC FP6, Sept. 2004 - Jan. 2010)
- http://www.robotcub.org
- CAVIAR - Context-Aware Vision Using Image-Based Active Recognition (EC FP6, 2002 - 2005)
- http://homepages.inf.ed.ac.uk/rbf/CAVIAR/
- MIRROR - Mirror Neurons for Recognition (EC FP5, 2001 - 2004)
Material
- VisLab book wishlist
- VisLab calendar
- Vislab list of journals
- VisLab slides template and logos
Robots
- Baltazar
- Chico (iCubLisboa01)
- Chica
- Chico head
- Vizzy
- Nao
- Darwin
Other resources
Blackhole network storage
You can store your work and backups on blackhole (10.0.3.118). As of 2013, this disk replaced the old europa_hd disk (10.0.3.117).
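As an illustration of how a backup to blackhole could be scripted, here is a minimal Python sketch. It assumes the share exported by blackhole is already mounted locally; the mount point, folder layout, and function names below are placeholders for illustration, not the lab's actual configuration.

```python
"""Minimal sketch: copy a work directory to a date-stamped folder on blackhole."""
import shutil
from datetime import date
from pathlib import Path

BLACKHOLE_MOUNT = Path("/mnt/blackhole")  # assumed mount point of 10.0.3.118
WORK_DIR = Path.home() / "work"           # local directory to back up


def backup(src: Path = WORK_DIR, dst_root: Path = BLACKHOLE_MOUNT) -> Path:
    """Copy `src` into `<dst_root>/backups/<name>-<date>` and return that path."""
    dst = dst_root / "backups" / f"{src.name}-{date.today().isoformat()}"
    shutil.copytree(src, dst, dirs_exist_ok=True)  # requires Python 3.8+
    return dst


if __name__ == "__main__":
    print(f"Backed up to {backup()}")
```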
Cortex cluster
For information on the setup of this cluster, see Cortex.
Cameras
- Nickon5000D photo camera
- Flea firewire camera
Demos
- Binocular Head
- Imitation
- Surveillance
iCubBrain cluster
For information on the setup of this cluster, see iCubBrain.
Network
See the VisLab network article.
Software repositories
Git repositories at:
https://github.com/vislab-tecnico-lisboa
GitHub repository guidelines (a name-check sketch follows this list):
- Repository names must be all lower case, with underscores separating words
- Avoid non-letter characters in repository names, including "-"
- A repository description is mandatory
- A README.md file is mandatory
- Using the repository wiki is highly encouraged if the README.md becomes very large
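The naming rules above can be checked mechanically. The following is a minimal, hypothetical Python sketch of such a check; it is not an official VisLab tool, and the function names and example repository names are made up for illustration.

```python
import re

# Lower-case words joined by underscores, letters only (per the guidelines).
NAME_RE = re.compile(r"^[a-z]+(?:_[a-z]+)*$")


def is_valid_repo_name(name: str) -> bool:
    """Return True if `name` contains only lower-case words joined by underscores."""
    return NAME_RE.fullmatch(name) is not None


def suggest_repo_name(title: str) -> str:
    """Turn a free-form title into a guideline-compliant name."""
    words = re.findall(r"[A-Za-z]+", title)
    return "_".join(word.lower() for word in words)


if __name__ == "__main__":
    print(is_valid_repo_name("object_recognition"))     # True
    print(is_valid_repo_name("Object-Recognition"))     # False: upper case and "-"
    print(suggest_repo_name("Object Recognition Demo")) # object_recognition_demo
```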
Old SVN repository at:
svn://svn.isr.ist.utl.pt/vislab
Tutorials
- 3D ball tracker
- OpenRAVE Tutorial
- ROS Tutorial
- Caltech Pedestrian Detection database and code
- Object Detection code by Felzenszwalb, Girshick, McAllester, Ramanan
- Using Git with a centralized workflow
Useful links
- Checklist for new VisLab members
VisLab category
The page Category:Vislab (linked below) lists all pages related to VisLab.
Categories: Learning | Robots | Vislab