iCub machines configuration
This page describes the setup of the computers connected to the iCub robot, which all share a common network disk and configuration.
See iCub machines configuration/Archive for obsolete information.
Operating system installation
Install Ubuntu LTS with default settings and partitioning. The first user to be created must be called icub, to make the distributed setup possible: for the NFS network mount, this user must have uid 1000 and gid 1000.
In order to add a user:
- either use the Ubuntu graphical frontends
- or use a Terminal:
sudo adduser icub
sudo usermod -aG sudo icub   # gives sudo privileges
Other operations
Network configuration
See also: VisLab network, ISR computing resources.
Configure a static IP as explained in one of the following subsections, depending on whether the machine is a desktop or a server.
Also, it is recommended to set up the /etc/hosts file as follows:
10.10.1.41   icubbrain1
10.10.1.42   icubbrain2
10.10.1.50   pc104
10.10.1.51   icub-cuda
10.10.1.53   icub-laptop
You should then be able to do ping icubbrain1 (and likewise for the other hosts).
Desktop machines
With the graphical Network Manager (https://help.ubuntu.com/16.04/ubuntu-help/net-fixed-ip-address.html), configure the IPv4 settings of the connection "Auto eth0" as follows:
| Address   | Netmask       | Gateway     | DNS servers        | Notes                  |
|-----------|---------------|-------------|--------------------|------------------------|
| 10.10.1.x | 255.255.255.0 | 10.10.1.254 | 10.0.0.1, 10.0.0.2 | visnet (iCub machines) |
| 10.0.x.y  | 255.255.0.0   | 10.0.0.254  | 10.0.0.1, 10.0.0.2 | isrnet (rest of ISR)   |
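The same settings can also be applied from a terminal with nmcli; the following is a sketch, assuming the connection is still named "Auto eth0" and using the visnet values from the table (replace 10.10.1.x with the machine's actual address):

nmcli connection modify "Auto eth0" ipv4.method manual \
    ipv4.addresses 10.10.1.x/24 ipv4.gateway 10.10.1.254 ipv4.dns "10.0.0.1 10.0.0.2"
nmcli connection up "Auto eth0"   # re-activate the connection so the new settings take effect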
Servers
Edit /etc/network/interfaces like this:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.x.y.z        # put your IP here, see above table
    netmask 255.255.x.y     # see above table
    network 10.10.1.0
    broadcast 10.10.1.255
    gateway 10.10.1.254
    dns-nameservers 10.0.0.1 10.0.0.2
In some versions of Ubuntu, to configure DNS you also need to edit /etc/resolvconf/resolv.conf.d/head like this:
nameserver 10.0.0.1
nameserver 10.0.0.2
then run:
sudo resolvconf -u
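To verify, check that the name servers now appear in the generated resolver file:

cat /etc/resolv.conf   # should list nameserver 10.0.0.1 and nameserver 10.0.0.2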
Dependencies
Installing the icub-common metapackage is sufficient. It is a bundle of the following packages (for more details see http://wiki.icub.org/wiki/Linux:Installation_from_sources, http://wiki.icub.org/wiki/Linux:_dependencies and https://github.com/robotology/yarp/blob/master/.travis.yml):
sudo apt install build-essential libace-dev libedit-dev libeigen3-dev libncurses5-dev gfortran libtinyxml-dev libgraphviz-dev
sudo apt install git-core ssh gcc g++ make cmake-curses-gui freeglut3-dev libxmu-dev libswscale-dev libavformat-dev
Qt5 graphics dependencies in Ubuntu 16.04 Xenial:
sudo apt install qttools5-dev qtdeclarative5-dev qtdeclarative5-controls-plugin qtdeclarative5-dialogs-plugin qtmultimedia5-dev qtdeclarative5-qtmultimedia-plugin qtquick1-5-dev libqt5opengl5-dev
Qt5 graphics dependencies in Debian Stretch:
sudo apt install qml-module-qt-labs-folderlistmodel qml-module-qt-labs-settings
iCub Simulator dependencies: SDL and ODE.
GTK graphical programs are obsolete and have been replaced by their Qt equivalents. If you still want the old GTK programs, install libgtkmm-2.4-dev.
Environment variables
- Create a file called ~/.bashrc_iCub like this one. Usually you do not need all of the following variables and settings, just a subset.
export ROBOT_CODE=/usr/local/src/robot

export YARP_ROOT=$ROBOT_CODE/yarp
export YARP_DIR=$YARP_ROOT/build
export ICUB_ROOT=$ROBOT_CODE/icub-main
export ICUB_DIR=$ICUB_ROOT/build
export ICUBcontrib_DIR=$ROBOT_CODE/icub-contrib-common/build
export YARP_DATA_DIRS=$YARP_DIR/share/yarp:$ICUB_DIR/share/iCub:$ICUBcontrib_DIR/share/ICUBcontrib:$ROBOT_CODE/speech/svox-speech/build/share/speech
export PATH=$PATH:$YARP_DIR/bin:$ICUB_DIR/bin:$ICUBcontrib_DIR/bin

export ODE_DIR=$ROBOT_CODE/ode-0.13/build

# OpenCV: select one of the following
#export OpenCV_DIR=$ROBOT_CODE/opencv2/build
#export OpenCV_DIR=$ROBOT_CODE/opencv2/build-cuda
export OpenCV_DIR=$ROBOT_CODE/opencv3/build
#export OpenCV_DIR=$ROBOT_CODE/opencv3/build-cuda

# To enable tab completion on yarp port names
if [ -f $YARP_ROOT/scripts/yarp_completion ]; then
    source $YARP_ROOT/scripts/yarp_completion
fi

# Set the name of your robot here.
export YARP_ROBOT_NAME=iCubLisboa01

# Set-up optimizations
export CMAKE_BUILD_TYPE=Release

# DebugStream customization
export YARP_VERBOSE_OUTPUT=0
export YARP_COLORED_OUTPUT=1
export YARP_TRACE_ENABLE=0
export YARP_FORWARD_LOG_ENABLE=0

# Lua
### DO NOT REMOVE ';;;' ###
export LUA_PATH=";;;$ROBOT_CODE/rFSM/?.lua;$ICUBcontrib_DIR/share/ICUBcontrib/contexts/interactiveObjectsLearning/LUA/?.lua"
export LUA_CPATH=";;;$YARP_ROOT/bindings/build-lua/?.so"
export PATH=$PATH:$ROBOT_CODE/rFSM/tools:$ICUBcontrib_DIR/share/ICUBcontrib/contexts/interactiveObjectsLearning/LUA
export PATH=$PATH:$YARP_ROOT/bindings/build-lua
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YARP_ROOT/bindings/build-lua
- Then, before the following line of /etc/bash.bashrc
[ -z "$PS1" ] && return
add this:
# per-user environment variables (non-interactive and interactive mode)
source $HOME/.bashrc_iCub
The reason why we use the above custom file (as opposed to the standard ~/.bashrc) is that we want to enforce the variables both during interactive and non-interactive sessions, such as commands launched via yarprun from another machine.
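A quick way to check that the variables are really picked up by non-interactive sessions is to echo one of them over ssh from another machine; a sketch, using the host names from the /etc/hosts table above:

ssh icub@icub-cuda 'echo $YARP_ROBOT_NAME'   # should print the robot name (e.g., iCubLisboa01), not an empty line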
See also https://git.robotology.eu/mbrunettini/icub-environment for the variables and scripts employed at IIT.
Additional software
OpenCV
Ubuntu packages
- this is the easiest way to install OpenCV; however, the Ubuntu packages might be too old and/or missing some specific features (in that case, proceed with a manual installation, see below)
sudo apt install libcv-dev libhighgui-dev libcvaux-dev libopencv-gpu-dev
Manual compilation
- on most machines: download OpenCV, create a build directory, CMake, set WITH_CUDA=OFF, compile, set OpenCV_DIR to the path of OpenCV-x.y.z/build, for example:
export OpenCV_DIR=$code/OpenCV-x.y.z/build
- on CUDA machines, in order to compile CUDA-enabled modules: create a build-cuda directory, CMake, set WITH_CUDA=ON, compile, set OpenCV_DIR to the path of OpenCV-x.y.z/build-cuda. You may need to disable cudacodec, see also https://github.com/opencv/opencv_contrib/issues/1786 (a build sketch for both cases follows)
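A sketch of the manual build described in the two items above; the version number is a placeholder and the sources are assumed to be unpacked under $ROBOT_CODE:

cd $ROBOT_CODE/OpenCV-x.y.z
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D WITH_CUDA=OFF ..
make -j$(nproc)
export OpenCV_DIR=$ROBOT_CODE/OpenCV-x.y.z/build
# on CUDA machines, repeat in a build-cuda directory with -D WITH_CUDA=ON
# and point OpenCV_DIR to OpenCV-x.y.z/build-cuda instead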
YARP and iCub
- In general, when compiling this software, do not use sudo make install but simply make (and configure the PATH variable in such a way that it finds the binaries from the build directory).
- If you work with the robot, use the volume shares exported from the NFS server.
- In other cases, follow the instructions on the iCub software article.
- yarp CMake configuration (a build sketch using these options follows this list):
CMAKE_BUILD_TYPE Release
YARP_COMPILE_GUIS
YARP_COMPILE_libYARP_MATH
// to enable 640x480@30Hz images with Bayer encoding
// install libraw1394-dev libdc1394-22-dev then enable
CREATE_OPTIONAL_CARRIERS
ENABLE_yarpcar_bayer ON
ENABLE_yarpcar_mjpeg ON
- icub-main CMake configuration
CMAKE_BUILD_TYPE Release
// on servers, do http://wiki.icub.org/wiki/Installing_IPOPT then enable
ENABLE_icubmod_cartesiancontrollerclient ON
ENABLE_icubmod_cartesiancontrollerserver ON
ENABLE_icubmod_gazecontrollerclient ON
- final configuration
yarp namespace /icub
yarp conf 10.10.1.53 10000
(yarpserver runs on iCub laptop)
- special machines such as pc104 need different flags
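A sketch of the corresponding out-of-source yarp build, using the options listed above and the paths from ~/.bashrc_iCub (icub-main is built the same way with its own options):

cd $YARP_ROOT
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D YARP_COMPILE_GUIS=ON \
      -D YARP_COMPILE_libYARP_MATH=ON \
      -D CREATE_OPTIONAL_CARRIERS=ON \
      -D ENABLE_yarpcar_bayer=ON \
      -D ENABLE_yarpcar_mjpeg=ON ..
make -j$(nproc)   # remember: no 'sudo make install'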
CUDA
Prerequisites
sudo apt install freeglut3-dev libdevil-dev libglew-dev
sudo apt purge libcudart*   # because we will manually install it
Troubleshooting: http://askubuntu.com/questions/410604/installing-nvidia-drivers-with-pkg1-run-ends-with-no-version-h-found
CUDA Toolkit, SDK and Examples
- stop X servers: sudo service lightdm stop (or gdm stop, depending on configuration)
- download and install the NVIDIA CUDA Toolkit from https://developer.nvidia.com/cuda-downloads
- preferably, use Installer Type: runfile (local). Alternatively, deb (local)
- if you obtain the error "Toolkit: Installation Failed. Using unsupported Compiler.", use the override option, e.g.,
./cuda_6.0.37_linux_64.run --override
- if you obtain the error "The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly":
- read the CUDA log in /tmp and verify that the graphics card is currently supported -- if not, you might need to install a legacy NVIDIA driver. For example, the Quadro FX 580 card needs NVIDIA legacy drivers 340.xx: install them and then answer no when CUDA Toolkit installer asks "Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 346.46?"
- sudo apt install linux-generic linux-headers-$(uname -r) linux-headers-generic-lts-trusty (or other Ubuntu version codename)
- call the installer specifying the kernel source path, e.g., ./cuda_7.0.28_linux.run --kernel-source-path=/usr/src/linux-headers-3.13.0-52-generic/
- output of successful installation (the environment setup it requests is sketched after this list):
Driver:   Installed
Toolkit:  Installed in /usr/local/cuda-7.0
Samples:  Not Selected

Please make sure that
 - PATH includes /usr/local/cuda-7.0/bin
 - LD_LIBRARY_PATH includes /usr/local/cuda-7.0/lib64, or, add /usr/local/cuda-7.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-7.0/bin
To uninstall the NVIDIA Driver, run nvidia-uninstall
- troubleshooting:
- http://askubuntu.com/questions/451672/installing-and-testing-cuda-in-ubuntu-14-04
- http://troylee2008.blogspot.pt/2012/05/linking-error-while-compiling-cuda-sdk.html
- https://devtalk.nvidia.com/default/topic/617414/-solved-cuda-driver-version-is-insufficient-for-cuda-runtime-version-fedora-18-rpmfusion-driver/
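After a successful installation, add the toolkit to the environment as requested by the installer output above, then run a couple of sanity checks (cuda-7.0 is an example; adjust to the version actually installed):

export PATH=$PATH:/usr/local/cuda-7.0/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.0/lib64
nvcc --version   # reports the toolkit version
nvidia-smi       # reports the driver version and the detected GPUs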
SiftGPU
- download it from http://cs.unc.edu/~ccwu/siftgpu/
- unzip it - typically in your home directory, not in the NFS shared folder
- compile with make
- if you obtain "unspecified launch failure" and 0 sift features/matches, check that the X server is running:
sudo service lightdm start
- if you obtain the error "/usr/local/cuda/bin/nvcc: Command not found", this can help:
sudo ln -s /usr/lib/nvidia-cuda-toolkit /usr/local/cuda
. See also http://askubuntu.com/questions/231503/nvcc-compiler-setup-ubuntu-12-04 - run the test program
SimpleSIFT
- it should work via ssh, as well as in a local session - example of successful execution:
$ ./SimpleSIFT
Unable to create OpenGL Context!
For nVidia cards, you can try change to CUDA mode in this case
NOTE: changing maximum texture dimension to 32768
[SiftGPU Language]: CUDA
Image size : 800x600
Image loaded : ../data/800-1.jpg
#Features: 3347
#Features MO: 3910
[RUN SIFT]: 0.339
Image size : 640x480
Image loaded : ../data/640-1.jpg
#Features: 2372
#Features MO: 2767
[RUN SIFT]: 0.208
NOTE: changing maximum texture dimension to 32768
[SiftMatchGPU]: CUDA
2247 sift matches were found;
- define export SIFTGPU_DIR=~/SiftGPU or similar in your iCub bashrc file, so that libsiftgpu.so is found by GPU accelerated modules
himrep
Clone https://github.com/robotology/himrep, follow the instructions to compile liblinear, CMake, make install
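A sketch of these steps, assuming the ICUBcontrib setup from ~/.bashrc_iCub (check that the install prefix reported by CMake is the ICUBcontrib one):

cd $ROBOT_CODE
git clone https://github.com/robotology/himrep
# compile liblinear first, as explained in the repository instructions
cd himrep && mkdir -p build && cd build
cmake ..
make install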
To use the deep neural network object recognition, based on Caffe, follow the instructions at README_Caffe.md.
If you get "error: kernel launches from templates are not allowed in system files", use an older GCC version like 4.6 (see also https://github.com/BVLC/caffe/issues/337). If you get "ImportError: No module named 'yaml'", do sudo apt install python-yaml
.
IOL
- sudo apt install lua5.1 liblua5.1-dev
- clone rFSM from https://github.com/kmarkus/rFSM (no need to compile anything here)
Clone https://github.com/robotology/iol, CMake, make install
stereo-vision
Clone https://github.com/robotology/stereo-vision, CMake with USE_SIFT_GPU=ON, make install
Best practices
Below are some tips and tricks taken from:
- http://wiki.icub.org/wiki/Yarp-config
- http://www.yarp.it/yarp-config.html
- IIT - How to setup and start the iCub from scratch (last updated December 2016): https://docs.google.com/document/d/1S9m4DbrV1AJWo_E1r3F8k3ZzmhaTOWRJ9WFjXPFqVos
XML files
Edit XML files locally, in /home/icub/.local/share/yarp
To install robot-specific XML files, compile icub-main (just make), then use commands like yarp-config robot --import-all (installs all files) or yarp-config robot --import iCubLisboa01 affordancesExploration.xml (installs specific files)
ini files
To install application conf/*.ini files, compile a project (e.g., icub-main, poeticon, iol), then use commands like yarp-config context --import-all (installs all files) or yarp-config context --import actionsRenderingEngine (installs the files of specific applications)
Disabling some robot parts
If the robot is not complete (or some parts need to be disabled):
- In a pc104 shell, type yarp-config robot --list to look for the .ini files from where the configuration values are launched: these are local copies of the robots-configuration repository files
- Go to the INSTALLED DATA directory path and then, within the corresponding robot folder (e.g., iCubLisboa01), look for the file yarprobotinterface.ini, which points to a .xml file containing the configuration paths for all the robot parts (e.g., icub_all.xml)
- If the local config file does not exist, there is only the canonical file in the build path: in that case, create a local one using yarp-config --import
- Make a copy of the .xml file, giving it a descriptive name (e.g., icub_no_legs.xml), and, in the copied file, comment or remove all lines that refer to the .ini files of the part(s) that you want to disable (see the sketch after this list)
- Edit the contents of yarprobotinterface.ini so that it points to the new .xml file where the parts have been commented out
- In the iCubStartup application GUI or XML, modify the way gravityCompensator and wholeBodyDynamics are launched, so that they don't look for the configuration files of the parts that have been disabled: in the legs example, just add --no_legs to the argument list
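A sketch of the file operations described above; the paths and file names are examples, use the directory actually reported by yarp-config robot --list:

cd /home/icub/.local/share/yarp/robots/iCubLisboa01
cp icub_all.xml icub_no_legs.xml
# edit icub_no_legs.xml: comment out or remove the includes of the legs parts
# edit yarprobotinterface.ini: make it reference icub_no_legs.xml instead of icub_all.xml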