Vislab

From ISRWiki

== Projects ==
=== Current projects ===


* POETICON++ - Robots Need Language: A computational mechanism for generalisation and generation of new behaviours in robots (EC FP7, Jan. 2012 - Dec. 2015)
** http://www.poeticon.eu/
** The main objective of POETICON++ is the development of a computational mechanism for generalisation of motor programs and visual experiences for robots. To this end, it will integrate natural language and visual action/object recognition tools with motor skills and learning abilities in the iCub humanoid. Tools and skills will engage in a cognitive dialogue for novel action generalisation and creativity experiments in two scenarios of "everyday activities", comprising (a) behaviour generation through verbal instruction, and (b) visual scene understanding. POETICON++ views natural language as a necessary tool for endowing artificial agents with generalisation and creativity in real-world environments.
 
* Dico(re)²s - Discount Coupon Recommendation and Redemption System (EC FP7, July 2011 - June 2013)
** http://www.dicore2s.com/
** Dico(re)²s develops and deploys a coupon-based discount campaign platform to provide consumers and retailers/manufacturers with a personalized environment for maximum customer satisfaction and business profitability.
 
* First-MM - Flexible Skill Acquisition and Intuitive Robot Tasking for Mobile Manipulation (EC FP7, Feb. 2010 - Jul. 2013)
** http://www.first-mm.eu/
** The goal of First-MM is to build the basis for a new generation of autonomous mobile manipulation robots that can be flexibly instructed to perform complex manipulation and transportation tasks. The project will develop a novel robot programming environment that allows even non-expert users to specify complex manipulation tasks in real-world environments. In addition to a task specification language, the environment includes concepts for probabilistic inference and for learning manipulation skills from demonstration and from experience.
=== Past projects ===

* RoboSoM - A Robotic Sense of Movement (EC FP7, Dec. 2009 - Dec. 2012)
** http://www.robosom.eu/
** This project aims at advancing the state of the art in motion perception and control in a humanoid robot. The fundamental principles to explore are rooted in theories of human perception: Expected Perception (EP) and the Vestibular Unified Reference Frame.
* HANDLE - Developmental Pathway Towards Autonomy and Dexterity in Robot In-Hand Manipulation (EC FP7, Feb. 2009 - Feb. 2013)
** http://www.handle-project.eu
** This project aims at providing advanced perception and control capabilities to the Shadow Robot hand, one of the most advanced robotic hands in mechanical terms. We follow some paradigms of human learning, namely learning by imitation and by self-exploration, to make the system able to grasp and manipulate objects of different characteristics. Different object characteristics and usages (object affordances) determine the way the hand performs grasping and manipulation actions.
* URUS - Ubiquitous Networking Robotics in Urban Settings (EC FP6, Dec. 2006 - Nov. 2009)
** http://urus.upc.es
* RobotCub - Robotic Open-Architecture Technology for Cognition, Understanding and Behaviour (EC FP6, Sept. 2004 - Jan. 2010)
** http://www.robotcub.org
* CAVIAR - Context-Aware Vision Using Image-Based Active Recognition (EC FP6, 2002 - 2005)
** http://homepages.inf.ed.ac.uk/rbf/CAVIAR/
* MIRROR - Mirror Neurons for Recognition (EC FP5, 2001 - 2004)

== People ==

=== Permanent staff ===

* [http://users.isr.ist.utl.pt/~jasv/ José Santos-Victor]
* [http://users.isr.ist.utl.pt/~alex Alexandre Bernardino]
* [http://users.isr.ist.utl.pt/~jpc/ João Paulo Costeira]
* [http://welcome.isr.ist.utl.pt/people/index.asp?accao=showpeople&id_people=2 João Sentieiro]
* [http://users.isr.ist.utl.pt/~jag/ José Gaspar]
 
=== Researchers ===
 
* [http://webdiis.unizar.es/~montesan/ Luis Montesano]
* [http://www.isr.ist.utl.pt/~macl Manuel Lopes]
* [http://users.isr.ist.utl.pt/~plinio/ Plinio Moreno]
* [http://users.isr.ist.utl.pt/~ricardo/ Ricardo Ferreira]
* [http://users.isr.ist.utl.pt/~rmcantin Rubén Martínez-Cantín]
 
=== PhD students ===
 
* Bruno Damas
* Dario Figueira
* [http://users.isr.ist.utl.pt/~gsaponaro/ Giovanni Saponaro]
* Jonas Hörnstein
* Jonas Ruesch
* [http://users.isr.ist.utl.pt/~mtaiana/ Matteo Taiana]
* Mauricio Arias
* Nuno Moutinho
* [http://welcome.isr.ist.utl.pt/people/index.asp?accao=showpeople&id_people=105 Pedro Ribeiro]
* Ravin DeSouza ([http://www.ist.utl.pt/en/about-IST/global-cooperation/IST-EPFL/ IST-EPFL])
* Sébastien Gay ([http://www.ist.utl.pt/en/about-IST/global-cooperation/IST-EPFL/ IST-EPFL])
* Urbain Prieur (IST-UPMC)
 
=== Master's students ===
 
* Ashish Jain
* Duarte Aragão
* João Pimentel
* Lester García
* Martim Brandão
* Tayebeh Razmi
 
=== Other staff ===
 
* Ana Santos (administrative)
* Ricardo Nunes (engineering)


=== Past members, visitors and miscellaneous ===

* Anastácia Rodrigues (administrative)
* António Bastos
* [http://roboticslab.uc3m.es/roboticslab/persona.php?id_pers=42 Carla González], Universidad Carlos III de Madrid, Spain
* Carlo Favali, visiting PhD student, University of Genoa, Italy
* Carlos Carreira, University of Valencia, Spain
* César Silva, Ph.D. (2001)
* Christian Wressnegger
* Claudia Deccó, visiting PhD student, University of São Paulo, Brazil
* Cláudia Soares
* Daniel Matter, IAESTE intern
* Diego Ortin Trasobares, visiting PhD student, University of Zaragoza, Spain
* Etienne Grossmann
* Eval Bacca Cortes, visiting MSc student, University of Cali, Colombia
* Freek Stulp, visiting researcher, University of Groningen, The Netherlands, and University of Edinburgh, UK
* [http://www.speech.kth.se/~giampi/ Giampiero Salvi], Royal Institute of Technology, Sweden
* Gianluca De Leo, University of Genoa, Italy
* Ivana Cingovksa, IAESTE intern
* [http://webdiis.unizar.es/~jminguez/ Javier Minguez], visiting PhD student, University of Zaragoza, Spain
* [http://users.isr.ist.utl.pt/~maciel/ João Maciel], Ph.D. (2002)
* José Eduardo Vianna, Universidade Federal do Espírito Santo, Brazil
* Kristijan Petkov, summer 2010 IAESTE intern
* Lenildo Silva, visiting PhD student, Universidade Federal do Rio de Janeiro, Brazil
* Luís Jordão
* Luís Vargas
* Marco Zucchelli, visiting PhD student, Royal Institute of Technology, Sweden
* Maria Nazarycheva
* Matteo Perrone, University of Genoa, Italy
* Miguel Praça
* Mohit Khurana, summer 2008 intern
* Niall Winters, visiting PhD student, Trinity College Dublin, Ireland
* [http://nicolagreggio.altervista.org/ Nicola Greggio], Sant'Anna School of Advanced Studies, Pisa, Italy
* Nuno Gracias
* Nuno Pinho
* [http://www.gatv.ssr.upm.es/index.php/en/component/content/article/62 Nuria Sánchez], Universidad Politécnica de Madrid, Spain
* Raquel Vassallo, visiting PhD student, Universidade Federal do Espírito Santo, Brazil
* Ricardo Beira
* [http://users.isr.ist.utl.pt/~rco/ Ricardo Oliveira]
* Roberto Iannello, University of Genoa, Italy
* Roger Castro Freitas, visiting PhD student, Universidade Federal do Espírito Santo, Brazil
* Sandra Nope Rodríguez
* Sjoerd van der Zwaan, M.Sc. (2001)
* Verica Krunić
* Vicente Javier Traver
* Vítor Costa, M.Sc. (1999)

== Material ==

* [[VisLab book wishlist]]
* [[VisLab calendar]]
* [[Vislab list of journals]]
* [[VisLab slides template and logos]]


== Robots ==

* [[Chico]] (iCubLisboa01)
* [[Chica]]
* [[Chico head]]
* [[Vizzy]]
* [[Nao]]
* [[Darwin]]


== Other resources ==
=== Blackhole network storage ===
You can store your work and backups on blackhole (10.0.3.118). As of 2013, this disk replaced the old europa_hd disk (10.0.3.117).
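The access protocol for blackhole is not documented on this page; assuming SSH access is available (the username and destination path below are hypothetical, so confirm them with the lab sysadmin), a backup could be sketched with rsync:

```shell
#!/bin/sh
# Hypothetical sketch: back up a local work directory to blackhole.
# The /backup/<user>/ destination path and SSH access are assumptions,
# not documented facts about the server.
BLACKHOLE=10.0.3.118
LABUSER=$(whoami)
SRC="$HOME/work/"
DEST="$LABUSER@$BLACKHOLE:/backup/$LABUSER/"

# --archive preserves permissions and timestamps; --compress helps over
# the lab network. The command is printed rather than executed, so the
# paths can be reviewed first.
CMD="rsync --archive --compress $SRC $DEST"
echo "$CMD"
```

Running the printed command (or adding <code>--dry-run</code> first) would then perform the actual transfer.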


=== Cortex cluster ===

''For information on the setup of this cluster, see [[Cortex]].''
 
=== Cameras ===
* [[Nickon5000D]] photo camera
* [[Flea]] firewire camera


=== Demos ===
=== iCubBrain cluster ===

''For information on the setup of this cluster, see [[iCubBrain]].''


=== Network ===

''See the [[VisLab network]] article.''

=== Software repositories ===

Git repositories at:
  https://github.com/vislab-tecnico-lisboa

GitHub repository guidelines:


* Repository names must be all lower case, with underscores separating words
* Repository names should avoid non-letter characters, including "-"
* A repository description is mandatory
* A README.md file is mandatory
* Using the repository wiki is highly encouraged if the README.md grows very large
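The naming rules above are mechanical enough to check in the shell before creating a repository. A minimal sketch (the repository name below is a made-up example; the guidelines are ambiguous about digits, so this check allows only lower-case letters and underscores):

```shell
#!/bin/sh
# Check a candidate repository name against the guidelines above:
# all lower case, words separated by underscores, no "-" or other
# non-letter characters. The name is a made-up example.
name="object_recognition_demo"

case "$name" in
  *[!a-z_]*) echo "invalid: use only lower-case letters and underscores" ;;
  *)         echo "valid" ;;  # prints "valid" for the example name
esac
```

A conforming repository is then cloned from the organisation as usual, e.g. <code>git clone https://github.com/vislab-tecnico-lisboa/&lt;repo&gt;.git</code>.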


Old SVN repository at:
   svn://svn.isr.ist.utl.pt/vislab




=== Tutorials ===

* [[3D ball tracker]]
* [[OpenRAVE Tutorial]]
* [[ROS Tutorial]]
* [[DollarPedestrianDetectionCode | Caltech Pedestrian Detection database and code]]
* [[FelzenszwalbDetectionCode | Object Detection code by Felzenszwalb, Girshick, McAllester, Ramanan]]
* [[GitCentralizedWorkflow | Using Git with a centralized workflow]]


=== Useful links ===


* [[Checklist for new VisLab members]]
* YARP for iCub: [[RobotCub coding basics]]
* iCub Joints: http://eris.liralab.it/wiki/ICub_joints
* iKinArmCtrl: http://eris.liralab.it/iCub/dox/html/group__iKinArmCtrl.html


== VisLab category ==

The page [[:Category:Vislab]] (linked below) lists all pages related to VisLab.

== General information ==

* Institutional information: http://vislab.isr.ist.utl.pt
* Our YouTube channel, containing videos and demonstrations: http://www.youtube.com/user/VislabLisboa
* Our internal video page on this wiki: [[VisLab Videos]]

== Research topics ==

* Machine Learning
* Computer Vision

''Latest revision as of 23:10, 4 October 2017.''