Cortex

Cortex is a server used by VisLab for running simulations (not the ones with the iCub robot).

''Old information can be consulted at [[Cortex/Archive]].''

= Specifications =

As of 2017, there is one machine (cortex1) with these specs:
* 8 x [http://ark.intel.com/products/65523/Intel-Core-i7-3770K-Processor-(8M-Cache-up-to-3_90-GHz) i7-3770K] @ 3.50GHz processor
* 16GB of memory (<code>sudo dmidecode --type 17</code> to see RAM speed and type)
* 112GB SSD drive + 1TB HDD drive
* NVidia [http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-670 GeForce GTX 670] graphics card (CUDA)
* internal ISR IP address: 10.10.1.1

= The Cluster =

Cortex is a computation rack for VisLab humanoid robots. It contains 7 machines:
* 1 server that manages startup, shutdown and the file system of the clients.
* 6 clients that run the user processes.

All clients mount the same file system, so changes made to the file system on one client are reflected on all the others.

= The Network =

Cortex machines are on the VisLab robotics network domain:
* Domain: visnet
* Subnet: 10.10.1.*
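To confirm which visnet address and name a given machine is using, something like the following can be run on the node itself (the exact interface names may differ from machine to machine):

  hostname -f          # should print the node's domain name, e.g. cortex1.visnet
  ip -4 addr show      # look for the 10.10.1.* address on the main interface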
 
== Cortex nodes ==
 
Cortex server and clients have the following IPs and domain names (a quick reachability check is sketched after this list):
* Server:  10.10.1.240, server.visnet
* Client 1: 10.10.1.1,  cortex1.visnet
* Client 2: 10.10.1.2,  cortex2.visnet
* Client 3: 10.10.1.3,  cortex3.visnet
* Client 4: 10.10.1.4,  cortex4.visnet
* Client 5: 10.10.1.5,  cortex5.visnet
* Client 6: 10.10.1.6,  cortex6.visnet
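As a rough reachability check (assuming ICMP is allowed on visnet), the clients can be pinged from the server with a small shell loop:

  # ping each cortex client once and report whether it answers
  for i in 1 2 3 4 5 6; do
      if ping -c 1 -W 1 10.10.1.$i > /dev/null 2>&1; then
          echo "cortex$i.visnet (10.10.1.$i) is up"
      else
          echo "cortex$i.visnet (10.10.1.$i) is down"
      fi
  done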
 
== Other nodes ==
 
Other assigned IPs and names are:
* Gateway: 10.10.1.254, gtisr.visnet
* Cortex Switch: 10.10.1.250, swcompurack.visnet
* Vislab Switch: 10.10.1.251, swvislab.visnet
* DHCP Range: 10.10.1.100-199
* Chico Net: 10.10.1.50-59
* Chica Net: 10.10.1.60-69
* Balta Net: 10.10.1.70-79
 
== Connectivity ==
Cortex machines are connected to the Cortex switch, which links to the VisLab switch through a 4 Gb/s fiber optic connection.
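To verify the negotiated link speed on a given node, something like the following can be used (the interface name eth0 is an assumption and may differ):

  # show the negotiated speed and duplex of the Ethernet interface
  sudo ethtool eth0 | grep -E 'Speed|Duplex'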
 
== Traffic ==
Network traffic can be checked at:
* http://inode1.isrnet/cacti (user guest, pass guest)
 
= The Cortex Server =
 
The server has:
* The boot folder for the clients at /tftpboot/pxelinux.cfg (a sketch of a boot entry follows this list). It contains the files:
** default - the default boot file
** <mac_address> - specific to the machine with the given MAC address
* Startup scripts for each machine at /nfsroot/app
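For illustration only, a PXE boot entry for an NFS-root client typically looks like the sketch below; the kernel and initrd names and the exported NFS path are assumptions, not the actual Cortex configuration:

  # /tftpboot/pxelinux.cfg/default - minimal sketch of an NFS-root boot entry
  DEFAULT linux
  LABEL linux
      KERNEL vmlinuz                # kernel image served over TFTP (assumed name)
      APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.10.1.240:/nfsroot ip=dhcp rw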
 
= The Cortex Clients =
 
== Configuration ==
 
The clients have:
* A superuser account (compurack) to administer system-wide settings (configurations, libraries, etc.).
* Normal user accounts. By default, the login script runs the contents of the file $HOME/.bash_env, where users can set their environment variables, e.g. export ICUB_DIR=$HOME/iCub (an example file is sketched after this list).
* A yarp account used to update and install the YARP library. YARP_DIR is set by default to /home/yarp/yarp2 for all users (in /etc/bash.bashrc).
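A minimal example of what a user's $HOME/.bash_env might contain (the ICUB_DIR line is the example given above; the PATH line is purely illustrative):

  # $HOME/.bash_env - per-user environment variables, read at login
  export ICUB_DIR=$HOME/iCub
  export PATH=$ICUB_DIR/bin:$PATH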
 
= Global Libraries and Repositories =
 
== YARP ==
YARP was set up using the following commands (after logging in as yarp):
  cvs -d:pserver:anonymous@yarp0.cvs.sourceforge.net:/cvsroot/yarp0 login
  cvs -z3 -d:pserver:anonymous@yarp0.cvs.sourceforge.net:/cvsroot/yarp0 co -P yarp2
  cd yarp2
  cmake .    (or ccmake .)
  make
  make test
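To sanity-check the build (assuming the binaries end up in yarp2/bin, which depends on the CMake settings), a name server can be started and tested:

  ./bin/yarp server &        # start a YARP name server in the background
  ./bin/yarp check           # run YARP's basic self-checks against it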
 
== OTHER ==
 
Other system-wide libraries/apps are installed by the superuser. Currently, the following libraries are installed:
 
CURSES
  apt-get install libncurses5-dev
 
ACE
  apt-get install libace-dev
 
CMAKE
  apt-get install cmake
 
GSL
  apt-get install libgsl0-dev
 
GTK/GTKMM/GLADE
  apt-get install libgtk2.0-dev
  apt-get install libgtkmm-2.4-dev
  apt-get install libglademm-2.4-dev

OPENCV
''Note: the repository is now in SVN form; these instructions need to be updated.''
  cvs -z3 -d:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:/cvsroot/opencvlibrary co -P opencv
  cd opencv
  ./configure
  make
  make install
  add /usr/local/lib to /etc/ld.so.conf and run ldconfig so the new libraries are found
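To verify that OpenCV is installed and visible to the linker (assuming this OpenCV version ships an opencv.pc pkg-config file), one can run:

  pkg-config --modversion opencv    # print the installed OpenCV version
  ldconfig -p | grep opencv         # confirm the linker cache knows the OpenCV libraries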
 
= User Repositories =
 
Each user should manage their own repositories, e.g. the iCub repository:
  cvs -d vislab@cvs.robotcub.org:/cvsroot/robotcub co iCub
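If the checkout complains about the access method, the usual CVS fix is to go through SSH with the :ext: method; a sketch, assuming the vislab account has SSH access to cvs.robotcub.org:

  export CVS_RSH=ssh
  cvs -d:ext:vislab@cvs.robotcub.org:/cvsroot/robotcub co iCub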
 
= Other configurations =
 
Tuning the network (these sysctl settings enlarge the kernel socket buffers):
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
  sysctl -w net.core.rmem_default=65536
  sysctl -w net.core.wmem_default=65536
  sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
  sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
  sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
  sysctl -w net.ipv4.route.flush=1
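These values do not survive a reboot. To make them persistent, the same keys can be added to /etc/sysctl.conf and reloaded, for example:

  # append to /etc/sysctl.conf (same values as above)
  net.core.rmem_max = 8388608
  net.core.wmem_max = 8388608
  net.core.rmem_default = 65536
  net.core.wmem_default = 65536
  net.ipv4.tcp_rmem = 4096 87380 8388608
  net.ipv4.tcp_wmem = 4096 65536 8388608
  net.ipv4.tcp_mem = 8388608 8388608 8388608
  sysctl -p    # then reload the file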
 
Prompt ($PS1):<br>
The prompt is set to "user@cortex?$" in /etc/bash.bashrc. With this setting, if you log in to Cortex1, the prompt will be "user@cortex1$".
We chose to do so because it is often convenient to have the number of the Cortex machine you're working on embedded in the prompt.
By default, though, this configuration is overridden in the users' ~/.bashrc file, and the prompt is set to "user@source" regardless of the Cortex machine you log in to.<br>
If you want to inhibit this behaviour and keep a prompt like "user@cortex?", just comment out these lines in your ~/.bashrc:
  # set a fancy prompt (non-color, unless we know we "want" color)
  case "$TERM" in
  xterm-color)
      PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
      ;;
  *)
      PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
      ;;
  esac
 
= Helper commands =
 
* Check the machine architecture: uname -m

* Check the type/architecture of a binary: file <filename>

* Set bash as a user's login shell in /etc/passwd

* Check disk space: du -h -s /home

* Check per-user processes: ps -U <user>
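Some of these checks can also be run across all the clients from the server; a rough sketch, assuming passwordless SSH between the nodes:

  # report the load on every cortex client
  for i in 1 2 3 4 5 6; do
      echo "--- cortex$i ---"
      ssh cortex$i.visnet uptime
  done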


[[Category:Vislab]]
