iCub machines configuration

This page describes the setup of the computers connected to the iCub robot; they all share a common network disk and configuration.

See iCub machines configuration/Archive for obsolete information.

Operating system installation

Install Ubuntu LTS with default settings and partitioning. The first user to be created must be called icub, to make the distributed setup possible: for the NFS network mount, this user has to have uid 1000 and gid 1000.

In order to add a user:

  • either use the Ubuntu graphical frontends
  • or use a Terminal:
sudo adduser icub
sudo usermod -aG sudo icub  # gives sudo privileges
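
To double-check that the icub user got the expected ids (a quick sanity check; 1000 is the first uid/gid Ubuntu assigns, so this is normally automatic on a fresh install):

 id icub    # should report uid=1000(icub) gid=1000(icub)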

Other operations

Network configuration

See also: VisLab network, ISR computing resources.

Configure a static IP as explained in one of the following subsections, depending on whether the machine is a desktop or a server.

Also, it is recommended to set up the /etc/hosts file as follows:

10.10.1.41 icubbrain1
10.10.1.42 icubbrain2
10.10.1.50 pc104
10.10.1.51 icub-cuda
10.10.1.53 icub-laptop

You should then be able to ping icubbrain1.

Desktop machines

With the graphical Network Manager (https://help.ubuntu.com/16.04/ubuntu-help/net-fixed-ip-address.html), configure the connection "Auto eth0" IPv4 as follows:

Address    Netmask        Gateway      DNS servers         Notes
10.10.1.x  255.255.255.0  10.10.1.254  10.0.0.1, 10.0.0.2  visnet (iCub machines)
10.0.x.y   255.255.0.0    10.0.0.254   10.0.0.1, 10.0.0.2  isrnet (rest of ISR)
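
On recent Ubuntu releases the same settings can also be applied from a terminal with nmcli, NetworkManager's command-line client; a sketch for the visnet case, assuming the connection is still named "Auto eth0" (adjust the name and address to your machine):

 nmcli connection modify "Auto eth0" ipv4.method manual \
     ipv4.addresses 10.10.1.x/24 ipv4.gateway 10.10.1.254 \
     ipv4.dns "10.0.0.1 10.0.0.2"
 nmcli connection up "Auto eth0"    # re-activate the connection with the new settings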

Servers

Edit /etc/network/interfaces like this:

auto lo
iface lo inet loopback
  
auto eth0
iface eth0 inet static
address 10.x.y.z # put your IP here, see above table
netmask 255.255.x.y # see above table
network 10.10.1.0
broadcast 10.10.1.255
gateway 10.10.1.254
dns-nameservers 10.0.0.1 10.0.0.2

In some versions of Ubuntu, to configure DNS you also need to edit /etc/resolvconf/resolv.conf.d/head like this:

nameserver 10.0.0.1
nameserver 10.0.0.2

then run:

sudo resolvconf -u

Dependencies

Installing the icub-common metapackage is sufficient. It is a bundle of the following packages (for more details see http://wiki.icub.org/wiki/Linux:Installation_from_sources, http://wiki.icub.org/wiki/Linux:_dependencies and https://github.com/robotology/yarp/blob/master/.travis.yml):

sudo apt install build-essential libace-dev libedit-dev libeigen3-dev libncurses5-dev gfortran libtinyxml-dev libgraphviz-dev
sudo apt install git-core ssh gcc g++ make cmake-curses-gui freeglut3-dev libxmu-dev libswscale-dev libavformat-dev

Qt5 graphics dependencies in Ubuntu 16.04 Xenial:

sudo apt install qttools5-dev qtdeclarative5-dev qtdeclarative5-controls-plugin qtdeclarative5-dialogs-plugin qtmultimedia5-dev qtdeclarative5-qtmultimedia-plugin qtquick1-5-dev libqt5opengl5-dev

Qt5 graphics dependencies in Debian Stretch:

sudo apt install qml-module-qt-labs-folderlistmodel qml-module-qt-labs-settings

iCub Simulator dependencies: SDL and ODE.

GTK graphical programs are obsolete and have been replaced by their Qt equivalents. If you still want the old GTK programs, install libgtkmm-2.4-dev.

Environment variables

  • Create a file called ~/.bashrc_iCub like the one below. Usually you do not need all of the following variables and settings, just a subset.
 export ROBOT_CODE=/usr/local/src/robot
 export YARP_ROOT=$ROBOT_CODE/yarp
 export YARP_DIR=$YARP_ROOT/build
 export ICUB_ROOT=$ROBOT_CODE/icub-main
 export ICUB_DIR=$ICUB_ROOT/build
 export ICUBcontrib_DIR=$ROBOT_CODE/icub-contrib-common/build
 export YARP_DATA_DIRS=$YARP_DIR/share/yarp:$ICUB_DIR/share/iCub:$ICUBcontrib_DIR/share/ICUBcontrib:$ROBOT_CODE/speech/svox-speech/build/share/speech
 export PATH=$PATH:$YARP_DIR/bin:$ICUB_DIR/bin:$ICUBcontrib_DIR/bin
 export ODE_DIR=$ROBOT_CODE/ode-0.13/build
 # OpenCV: select one of the following
 #export OpenCV_DIR=$ROBOT_CODE/opencv2/build
 #export OpenCV_DIR=$ROBOT_CODE/opencv2/build-cuda
 export OpenCV_DIR=$ROBOT_CODE/opencv3/build
 #export OpenCV_DIR=$ROBOT_CODE/opencv3/build-cuda
 # To enable tab completion on yarp port names
 if [ -f $YARP_ROOT/scripts/yarp_completion ]; then
   source $YARP_ROOT/scripts/yarp_completion
 fi
 # Set the name of your robot here.
 export YARP_ROBOT_NAME=iCubLisboa01
 # Set-up optimizations
 export CMAKE_BUILD_TYPE=Release
 # DebugStream customization
 export YARP_VERBOSE_OUTPUT=0
 export YARP_COLORED_OUTPUT=1
 export YARP_TRACE_ENABLE=0
 export YARP_FORWARD_LOG_ENABLE=0
 # Lua
 ### DO NOT REMOVE ';;;' ###
 export LUA_PATH=";;;$ROBOT_CODE/rFSM/?.lua;$ICUBcontrib_DIR/share/ICUBcontrib/contexts/interactiveObjectsLearning/LUA/?.lua"
 export LUA_CPATH=";;;$YARP_ROOT/bindings/build-lua/?.so"
 export PATH=$PATH:$ROBOT_CODE/rFSM/tools:$ICUBcontrib_DIR/share/ICUBcontrib/contexts/interactiveObjectsLearning/LUA
 export PATH=$PATH:$YARP_ROOT/bindings/build-lua
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YARP_ROOT/bindings/build-lua
  • Then, before the following line of /etc/bash.bashrc
[ -z "$PS1" ] && return

add this:

# per-user environment variables (non-interactive and interactive mode)
source $HOME/.bashrc_iCub

We use this custom file (as opposed to the standard ~/.bashrc) because we want the variables to be set in both interactive and non-interactive sessions, such as commands launched via yarprun from another machine.

See also https://git.robotology.eu/mbrunettini/icub-environment for the variables and scripts employed at IIT.

Additional software

OpenCV

Ubuntu packages

  • this is the easiest way to install OpenCV; however, the Ubuntu packages might be too old and/or missing some specific features (in that case, proceed with a manual installation, see below):
 sudo apt install libcv-dev libhighgui-dev libcvaux-dev libopencv-gpu-dev

Manual compilation

  • on most machines: download OpenCV, create a build directory, run CMake with WITH_CUDA=OFF, compile, and set OpenCV_DIR to the path of OpenCV-x.y.z/build, for example (a command sketch follows this list):
export OpenCV_DIR=$code/OpenCV-x.y.z/build
  • on CUDA machines, in order to compile CUDA-enabled modules: create a build-cuda directory, run CMake with WITH_CUDA=ON, compile, and set OpenCV_DIR to the path of OpenCV-x.y.z/build-cuda. You may need to disable cudacodec; see also https://github.com/opencv/opencv_contrib/issues/1786
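
A minimal command sketch of the non-CUDA build described above, assuming the OpenCV sources were unpacked under $ROBOT_CODE (paths and version are placeholders; on CUDA machines repeat the same steps in a build-cuda directory with WITH_CUDA=ON):

 cd $ROBOT_CODE/OpenCV-x.y.z
 mkdir build && cd build
 cmake .. -DWITH_CUDA=OFF    # use -DWITH_CUDA=ON in a separate build-cuda directory on CUDA machines
 make -j4
 export OpenCV_DIR=$ROBOT_CODE/OpenCV-x.y.z/build    # add this to ~/.bashrc_iCub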

YARP and iCub

  • In general, when compiling this software, do not use sudo make install but simply make (and configure the PATH variable in such a way that it finds the binaries from the build directory).
  • If you work with the robot, use the volume shares exported from the NFS server.
  • In other cases, follow the instructions on the iCub software article.
  • yarp CMake configuration (a command-line sketch follows this list)
CMAKE_BUILD_TYPE Release
YARP_COMPILE_GUIS ON
YARP_COMPILE_libYARP_MATH ON
// to enable 640x480@30Hz images with Bayer encoding
// install libraw1394-dev libdc1394-22-dev then enable
CREATE_OPTIONAL_CARRIERS ON
ENABLE_yarpcar_bayer ON
ENABLE_yarpcar_mjpeg ON
  • icub-main CMake configuration
CMAKE_BUILD_TYPE Release
// on servers, follow http://wiki.icub.org/wiki/Installing_IPOPT then enable
ENABLE_icubmod_cartesiancontrollerclient ON
ENABLE_icubmod_cartesiancontrollerserver ON
ENABLE_icubmod_gazecontrollerclient ON
  • final configuration
  1. yarp namespace /icub
  2. yarp conf 10.10.1.53 10000 (yarpserver runs on iCub laptop)
  • special machines such as pc104 need different flags
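
A command-line sketch of the configuration above (the same options can be set interactively with ccmake; the directory layout follows the variables defined in ~/.bashrc_iCub):

 # yarp
 cd $YARP_ROOT && mkdir -p build && cd build
 cmake .. -DCMAKE_BUILD_TYPE=Release -DYARP_COMPILE_GUIS=ON -DYARP_COMPILE_libYARP_MATH=ON
 # add -DCREATE_OPTIONAL_CARRIERS=ON -DENABLE_yarpcar_bayer=ON -DENABLE_yarpcar_mjpeg=ON for the optional carriers
 make

 # icub-main (add the ENABLE_icubmod_* options listed above as needed)
 cd $ICUB_ROOT && mkdir -p build && cd build
 cmake .. -DCMAKE_BUILD_TYPE=Release
 make

 # final configuration (yarpserver runs on the iCub laptop)
 yarp namespace /icub
 yarp conf 10.10.1.53 10000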

CUDA

Prerequisites

sudo apt install freeglut3-dev libdevil-dev libglew-dev
sudo apt purge libcudart*  // because we will manually install it

Troubleshooting: http://askubuntu.com/questions/410604/installing-nvidia-drivers-with-pkg1-run-ends-with-no-version-h-found

CUDA Toolkit, SDK and Examples

  • stop X servers: sudo service lightdm stop (or gdm stop depending on configuration)
  • download and install NVIDIA CUDA Toolkit from https://developer.nvidia.com/cuda-downloads
  • preferably, use Installer Type: runfile (local). Alternatively, deb (local)
  • if you obtain the error "Toolkit: Installation Failed. Using unsupported Compiler.", use the override option, e.g., ./cuda_6.0.37_linux_64.run --override
  • if you obtain the error "The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly":
    • read the CUDA log in /tmp and verify that the graphics card is currently supported -- if not, you might need to install a legacy NVIDIA driver. For example, the Quadro FX 580 card needs NVIDIA legacy drivers 340.xx: install them and then answer no when CUDA Toolkit installer asks "Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 346.46?"
    • sudo apt install linux-generic linux-headers-$(uname -r) linux-headers-generic-lts-trusty (or other Ubuntu version codename)
    • call the installer specifying the kernel source path, e.g., ./cuda_7.0.28_linux.run --kernel-source-path=/usr/src/linux-headers-3.13.0-52-generic/
  • output of successful installation:
Driver:   Installed
Toolkit:  Installed in /usr/local/cuda-7.0
Samples:  Not Selected
Please make sure that
-   PATH includes /usr/local/cuda-7.0/bin
-   LD_LIBRARY_PATH includes /usr/local/cuda-7.0/lib64, or, add /usr/local/cuda-7.0/lib64 to /etc/ld.so.conf and run ldconfig as root
To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-7.0/bin
To uninstall the NVIDIA Driver, run nvidia-uninstall
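
As the installer output suggests, make the toolkit visible to the shell, for example in ~/.bashrc_iCub (a sketch for the 7.0 install path; adjust the version to your installation):

 export PATH=$PATH:/usr/local/cuda-7.0/bin
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.0/lib64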

SiftGPU

  • download it from http://cs.unc.edu/~ccwu/siftgpu/
  • unzip it - typically in your home directory, not in the NFS shared folder
  • compile with make
  • if you obtain "unspecified launch failure" and 0 sift features/matches, check that the X server is running: sudo service lightdm start
  • if you obtain the error "/usr/local/cuda/bin/nvcc: Command not found", this can help: sudo ln -s /usr/lib/nvidia-cuda-toolkit /usr/local/cuda. See also http://askubuntu.com/questions/231503/nvcc-compiler-setup-ubuntu-12-04
  • run the test program SimpleSIFT - it should work via ssh, as well as in a local session
  • example of successful execution:
$ ./SimpleSIFT 
Unable to create OpenGL Context!
For nVidia cards, you can try change to CUDA mode in this case
NOTE: changing maximum texture dimension to 32768
[SiftGPU Language]:	CUDA
Image size :	800x600
Image loaded :	../data/800-1.jpg
#Features:	3347
#Features MO:	3910
[RUN SIFT]:	0.339
Image size :	640x480
Image loaded :	../data/640-1.jpg
#Features:	2372
#Features MO:	2767
[RUN SIFT]:	0.208
NOTE: changing maximum texture dimension to 32768
[SiftMatchGPU]: CUDA
2247 sift matches were found;
  • define export SIFTGPU_DIR=~/SiftGPU or similar in your iCub bashrc file, so that libsiftgpu.so is found by GPU accelerated modules

himrep

Clone https://github.com/robotology/himrep, follow its instructions to compile liblinear, then run CMake and make install
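
A minimal build sketch, assuming the repository is cloned under $ROBOT_CODE and liblinear has already been compiled as described in the repository instructions:

 cd $ROBOT_CODE
 git clone https://github.com/robotology/himrep
 cd himrep && mkdir build && cd build
 cmake .. && make install    # installs into the ICUBcontrib tree set up earlier, no sudo needed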

To use the deep neural network object recognition based on Caffe, follow the instructions in README_Caffe.md. If you get "error: kernel launches from templates are not allowed in system files", use an older GCC version such as 4.6 (see also https://github.com/BVLC/caffe/issues/337). If you get "ImportError: No module named 'yaml'", run sudo apt install python-yaml.

IOL

  • sudo apt install lua5.1 liblua5.1-dev
  • clone rFSM from https://github.com/kmarkus/rFSM (no need to compile anything here)

Clone https://github.com/robotology/iol, then run CMake and make install (a command sketch follows)
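
A sketch of the steps above, assuming rFSM and iol are cloned under $ROBOT_CODE (matching the LUA_PATH set in ~/.bashrc_iCub):

 sudo apt install lua5.1 liblua5.1-dev
 cd $ROBOT_CODE
 git clone https://github.com/kmarkus/rFSM          # no build step needed
 git clone https://github.com/robotology/iol
 cd iol && mkdir build && cd build
 cmake .. && make install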

stereo-vision

Clone https://github.com/robotology/stereo-vision, run CMake with USE_SIFT_GPU=ON, then make install
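
A build sketch, assuming the repository is cloned under $ROBOT_CODE and SiftGPU was installed as described above:

 cd $ROBOT_CODE
 git clone https://github.com/robotology/stereo-vision
 cd stereo-vision && mkdir build && cd build
 cmake .. -DUSE_SIFT_GPU=ON
 make install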

Best practices

Below are some tips and tricks taken from:

  • http://wiki.icub.org/wiki/Yarp-config
  • http://www.yarp.it/yarp-config.html
  • IIT - How to setup and start the iCub from scratch (https://docs.google.com/document/d/1S9m4DbrV1AJWo_E1r3F8k3ZzmhaTOWRJ9WFjXPFqVos), last updated December 2016

XML files

Edit XML files locally, in /home/icub/.local/share/yarp

To install robot-specific XML files, compile icub-main (just make) then use commands like yarp-config robot --import-all (installs all files) or yarp-config robot --import iCubLisboa01 affordancesExploration.xml (installs specific files)
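
For example (the commands quoted above, run after compiling icub-main):

 yarp-config robot --import-all                                      # installs all files
 yarp-config robot --import iCubLisboa01 affordancesExploration.xml  # installs specific files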

ini files

To install application conf/*.ini files, compile a project (e.g., icub-main, poeticon, iol) then use commands like yarp-config context --import-all (installs all files) or yarp-config context --import actionsRenderingEngine (installs the files of specific applications)
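
For example (the commands quoted above, run after compiling the corresponding project):

 yarp-config context --import-all                     # installs all files
 yarp-config context --import actionsRenderingEngine  # installs the files of a specific application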

Disabling some robot parts

If the robot is not complete (or some parts need to be disabled):

  • In a pc104 shell, type yarp-config robot --list to see the .ini files from which the configuration values are loaded: these are local copies of the robots-configuration repository files (a command sketch follows this list)
  • Go to the INSTALLED DATA directory path and then, within the corresponding robot folder (e.g., iCubLisboa01), look for the file yarprobotinterface.ini, which points to a .xml file which contains the configuration paths for all the robot parts (e.g., icub_all.xml)
  • If the local config file does not exist, there is only the canonical file in the build path: in that case, create a local one using yarp-config --import
  • Make a copy of the .xml file giving it a descriptive name (e.g., icub_no_legs.xml) and, in the copied file, comment or remove all lines that refer to .ini files of the part(s) that you want to disable
  • Edit the contents of yarprobotinterface.ini so that it points to the new .xml file where the parts have been commented
  • In the iCubStartup application GUI or xml, modify the way that gravityCompensator and wholeBodyDynamics are launched, so that they don't look for the configuration files of the parts that have been disabled: in the legs example, just add --no_legs to the argument list
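
A sketch of the file operations described in this list (the local path is an example for this robot; take the actual locations from the output of yarp-config robot --list):

 yarp-config robot --list                       # shows where the local robot configuration lives
 cd ~/.local/share/yarp/robots/iCubLisboa01     # example local path, adjust to your setup
 cp icub_all.xml icub_no_legs.xml               # comment out the parts to disable in the copy
 nano yarprobotinterface.ini                    # point it to icub_no_legs.xml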

See also