Cortex

From ISRWiki

Revision as of 19:01, 23 April 2009

Cortex is a computation rack for VisLab humanoid robots.

The Cluster

It contains 7 machines:

  • 1 server that manages startup, shutdown and the file system of the clients.
  • 6 clients that run the user processes.

All clients mount the same file system, so a change made on one client is immediately visible on all the others.
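A quick way to see whether /home really comes from the shared file system is to check the mount type. A hedged sketch (on a Cortex client this should report "shared" if the description above holds; elsewhere it will likely report "local"):

```shell
#!/bin/sh
# Sketch: report whether /home is an NFS mount (shared) or a local disk.
if mount | grep ' /home ' | grep -q nfs; then
  state=shared
else
  state=local
fi
echo "$state"
```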

The Network

Cortex machines are on the VisLab robotics network domain:

  • Domain: visnet
  • Subnet: 10.10.1.*

Cortex nodes

The Cortex server and clients have the following IP addresses and domain names:

  • Server: 10.10.1.240, server.visnet
  • Client 1: 10.10.1.1, cortex1.visnet
  • Client 2: 10.10.1.2, cortex2.visnet
  • Client 3: 10.10.1.3, cortex3.visnet
  • Client 4: 10.10.1.4, cortex4.visnet
  • Client 5: 10.10.1.5, cortex5.visnet
  • Client 6: 10.10.1.6, cortex6.visnet
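For a machine that cannot resolve visnet names through the cluster's DNS, the table above maps directly onto an /etc/hosts fragment (a sketch; only needed where name resolution is unavailable):

```
10.10.1.240  server.visnet   server
10.10.1.1    cortex1.visnet  cortex1
10.10.1.2    cortex2.visnet  cortex2
10.10.1.3    cortex3.visnet  cortex3
10.10.1.4    cortex4.visnet  cortex4
10.10.1.5    cortex5.visnet  cortex5
10.10.1.6    cortex6.visnet  cortex6
```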

Other nodes

The other assigned IP addresses and names are:

  • Gateway: 10.10.1.254, gtisr.visnet
  • Cortex Switch: 10.10.1.250, swcompurack.visnet
  • Vislab Switch: 10.10.1.251, swvislab.visnet
  • DHCP Range: 10.10.1.100-199
  • Chico Net: 10.10.1.50-59
  • Chica Net: 10.10.1.60-69
  • Balta Net: 10.10.1.70-79

Connectivity

Cortex machines are connected to the Cortex switch, which links to the VisLab switch over a 4 Gb/s fiber-optic connection.

Traffic

Network traffic can be checked at:

The Cortex Server

The server has:

  • Boot folder for the clients at /tftpboot/pxelinux.cfg, containing the files:
    • default - the default boot file
    • <mac_address> - a boot file specific to the machine with the given MAC address
  • Startup scripts for each machine at /nfsroot/app
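The layout of such a PXE boot file typically looks like the following. This is a generic pxelinux sketch, not the actual Cortex configuration; the kernel, initrd, and NFS root paths are assumptions (only the server address 10.10.1.240 comes from this page):

```
DEFAULT linux
LABEL linux
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.10.1.240:/nfsroot ip=dhcp
```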

The Cortex Clients

Configuration

The clients have:

  • A superuser account (compurack) to administer system-wide settings (configurations, libraries, etc.)
  • Normal user accounts. By default the login script runs the contents of $HOME/.bash_env, where users can set their environment variables, e.g. export ICUB_DIR=$HOME/iCub.
  • A yarp account to update and install the YARP library. YARP_DIR is set to /home/yarp/yarp2 for all users (in /etc/bash.bashrc).
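A minimal ~/.bash_env along those lines. The ICUB_DIR value is the wiki's own example; the PATH line is an assumption about how a user might want to use it:

```shell
#!/bin/sh
# Sketch of a per-user ~/.bash_env as described above.
export ICUB_DIR="$HOME/iCub"
# YARP_DIR is already set system-wide in /etc/bash.bashrc,
# so it does not need to be repeated here.
export PATH="$ICUB_DIR/bin:$PATH"
echo "$ICUB_DIR"
```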

Global Libraries and Repositories

YARP

YARP was set up with the following commands (after logging in as yarp):

  cvs -d:pserver:anonymous@yarp0.cvs.sourceforge.net:/cvsroot/yarp0 login
  cvs -z3 -d:pserver:anonymous@yarp0.cvs.sourceforge.net:/cvsroot/yarp0 co -P yarp2
  cd yarp2
  cmake .    (or ccmake . for interactive configuration)
  make
  make test

OTHER

Other system-wide libraries and applications are installed by the superuser. The following libraries are currently installed:

CURSES

  apt-get install libncurses5-dev

ACE

  apt-get install libace-dev

CMAKE

  apt-get install cmake

GSL

  apt-get install libgsl0-dev

GTK/GTKMM/GLADE

  apt-get install libgtk2.0-dev
  apt-get install libgtkmm-2.4-dev
  apt-get install libglademm-2.4-dev

OPENCV

  NOTE: the OpenCV repository has since moved from CVS to Subversion (SVN), so the checkout command below is outdated.
  cvs -z3 -d:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:/cvsroot/opencvlibrary co -P opencv
  cd opencv
  ./configure
  make
  make install
  echo /usr/local/lib >> /etc/ld.so.conf
  ldconfig

User Repositories

Each user should manage their own repositories, e.g. the iCub repository:

  cvs -d vislab@cvs.robotcub.org:/cvsroot/robotcub co iCub

Other configurations

Network tuning (larger TCP buffers for high-throughput links):

  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
  sysctl -w net.core.rmem_default=65536
  sysctl -w net.core.wmem_default=65536
  sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
  sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
  sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
  sysctl -w net.ipv4.route.flush=1
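Values set with sysctl -w are lost on reboot; to make the tuning permanent, the same keys can go into /etc/sysctl.conf (same values as above, applied at boot or immediately with sysctl -p):

```
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 65536
net.core.wmem_default = 65536
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608
net.ipv4.tcp_mem = 8388608 8388608 8388608
```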


Helper commands

  • Check the kernel architecture: uname -m
  • Check a file's type and architecture: file <filename>
  • Set the user's shell to bash in /etc/passwd
  • Check disk space: du -h -s /home
  • Check a user's processes: ps -U <user>
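The du one-liner above can be extended into a small per-directory report. A self-contained sketch (it builds a temporary directory tree so it runs anywhere, rather than assuming the cluster's /home layout):

```shell
#!/bin/sh
# Sketch: summarize disk usage of each home directory under a base
# directory, in the spirit of `du -h -s /home` above.
base=$(mktemp -d)
mkdir -p "$base/alice" "$base/bob"
printf 'some data' > "$base/alice/notes.txt"
count=0
for d in "$base"/*/; do
  du -sh "$d"
  count=$((count + 1))
done
rm -rf "$base"
echo "$count directories"
```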