Cortex
Cortex is a cluster of 6 servers used by VisLab for development purposes (the other cluster, iCubBrain, is used for demos).
Old information can be consulted at Cortex/Archive.
Specifications
From an end-user perspective:
- each machine hosts an Intel Pentium Dual-Core processor (2 x E2180 @ 2.00GHz);
- memory: 2GB for each machine (run sudo dmidecode --type 17 to see RAM speed and type).
More specifically:
- each of the client machines has the following specs: Intel Desktop DG965SS motherboard (info1: http://www.intel.com/support/motherboards/desktop/dg965ss/sb/CS-026600.htm, info2: http://processormatch.intel.com/CompDB/SearchResult.aspx?Boardname=dg965ss); Intel Dual-Core E2180 CPU @ 2 GHz with 1 MB cache; 2 x 1GB DDR2 667 MHz RAM; rackmount 19" 2U case from Chieftec (UNC-210S-B), with ATX 300W power supply;
- the server machine has similar specs, plus an Intel PRO 1000GTLP/ENet gigabit Ethernet card and 2 x WD2500JS 250GB SATA II disks (8MB cache) for software RAID-1 (a status-check sketch follows).
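A quick way to verify the software RAID-1 status on the server (standard Linux md tools; the array name /dev/md0 is an assumption, not taken from this page):

 cat /proc/mdstat              # summary of all md arrays and their sync state
 sudo mdadm --detail /dev/md0  # detailed view (assumes the array is /dev/md0)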
Setup
The Cortex computation rack actually contains 7 machines:
- 1 server that manages startup, shutdown and the file system of the clients;
- 6 clients (named cortex1 ... cortex6) that run user processes.
All clients numbered 1 to 5 mount the same file system, so changes made in the file system of cortex[1-5] are reflected on the other four clients. Beware, though, that because of the way the file systems are mounted, there is some caching going on. This improves disk access performance, but strange phenomena might happen, e.g., after a file is modified and saved on one client, other clients can continue to see the old version of it for some time (probably less than one minute).
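A small demonstration of the effect (the file path is hypothetical, and attributing the delay to NFS attribute caching is our assumption, not stated on the original page):

 # on cortex1:
 echo "new content" > ~/test.txt
 # on cortex2, shortly afterwards, this may still print the old content:
 cat ~/test.txt
 # re-running it after a minute or so should show the new content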
The client cortex6 is separate for now, because it runs a 64-bit Linux.
Network setup
Connectivity
Cortex machines are connected to the Cortex switch, which links to the VisLab switch over a 4 Gbit/s fiber-optic connection.
Cortex nodes
Cortex server and clients have the following IPs and domain names:
- Server: 10.10.1.240, server.visnet
- Client 1: 10.10.1.1, cortex1.visnet
- Client 2: 10.10.1.2, cortex2.visnet
- Client 3: 10.10.1.3, cortex3.visnet
- Client 4: 10.10.1.4, cortex4.visnet
- Client 5: 10.10.1.5, cortex5.visnet
- Client 6: 10.10.1.6, cortex6.visnet
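A quick reachability check across all nodes, using the host names above (a convenience sketch, not from the original page):

 for i in 1 2 3 4 5 6; do
     ping -c 1 -W 1 cortex$i.visnet > /dev/null && echo "cortex$i up" || echo "cortex$i DOWN"
 done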
For further details, see VisLab network.
Additional setup
Boot procedure
The clients boot via the network, using the PXE system. Each machine determines its own identity and asks the server for a kernel image and an initial ram disk. Kernel images and initial ram disks are stored on the server in the /tftpboot/ directory. Kernels can be stock kernels, but the initial ram disk must be created in a way that enables booting from the network. This is not as bad as it sounds: it involves invoking the command mkinitramfs.
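A sketch of how such a network-boot initramfs can be generated with Ubuntu's initramfs-tools (the kernel version and output file name are examples, not taken from this page):

 # in /etc/initramfs-tools/initramfs.conf, set:
 #   MODULES=netboot
 #   BOOT=nfs
 # then build the image for the desired kernel version:
 mkinitramfs -o /tftpboot/initrd.img-2.6.28-11-generic 2.6.28-11-generic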
The server decides which kernel and initial ram disk to send to each machine based on the information stored in two files: /tftpboot/pxelinux.cfg/default (for cortexes 1-5) and /tftpboot/pxelinux.cfg/01-00-19-d1-9e-e9-53 (for cortex6).
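For reference, a PXELINUX configuration file of this kind typically looks like the following (the kernel/initrd names are examples, and the NFS root path is an assumption based on the rest of this page):

 DEFAULT linux
 LABEL linux
   KERNEL vmlinuz-2.6.28-11-generic
   APPEND initrd=initrd.img-2.6.28-11-generic root=/dev/nfs nfsroot=10.10.1.240:/nfsroot ip=dhcp rw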
The client root filesystems and the user home directories are also stored on the server machine, and each client mounts them at boot time.
Mounting of root directory
We are not sure which mechanism mounts the root filesystem, exactly. Here is the relevant line from /etc/fstab:

 # <file system> <mount point> <type> <options> <dump> <pass>
 /dev/nfs        /             nfs    defaults  1      1

(Most likely the actual mount is performed by the initramfs, driven by the root=/dev/nfs and nfsroot= parameters on the kernel command line; this fstab entry then mainly keeps the mount table consistent.)
Mounting of home directory
The home directory is mounted via the Upstart system a few seconds after booting. When rebooting the system, it is possible to log in while /home is still not mounted. In that case, log out and log in again, so that your environment variables are set correctly.
In November 2010, we created a file called /etc/init/mountHome-net.conf containing:

 description "Mount network filesystems"
 start on started networking or runlevel 2
 exec /usr/local/bin/mountHome.sh
and /usr/local/bin/mountHome.sh containing:

 # keep retrying until /home shows up in the mount table
 MOUNTED=$(mount | grep home)
 while [ -z "$MOUNTED" ]
 do
     # mount as user icub; log the output for debugging
     su icub -c 'mount /home' &> /var/tmp/mountHomeUpstartOut.txt
     MOUNTED=$(mount | grep home)
     echo $MOUNTED
     sleep 1
 done
 # leave some traces for post-boot inspection
 ls /home > /var/tmp/mountHomeUpstartLs.txt
 runlevel > /var/tmp/mountHomeUpstartRunlevel.txt
After a successful mount, we should see something like:

 $ mount | grep home
 10.10.1.240:/nfsroot.home on /home type nfs (rw,user=icub,addr=10.10.1.240)
If /home is wrongly mounted with the noexec flag, users won't be able to execute binaries located inside it.
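A quick check and fix for that case (standard mount usage; a sketch, not from the original page):

 # inspect the current mount options of /home
 mount | grep /home
 # if "noexec" appears among the options, remount with exec enabled
 sudo mount -o remount,exec /home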
Server machine
The server has:
- a boot folder for the clients at /tftpboot/pxelinux.cfg. It contains the files:
  - default - the default boot file;
  - <mac_address> - specific for a machine with the given MAC address.
- startup scripts for each machine at /nfsroot/app.
Client machines
The clients have:
- A superuser account (compurack) to administer system-wide settings (configurations, libraries, etc.).
- Normal user accounts. By default, the login script runs the contents of the file $HOME/.bash_env, where users can set their environment variables, e.g., export ICUB_ROOT=$HOME/iCub. This works for both interactive shell sessions and non-interactive ones (i.e., commands remotely invoked by yarprun); see the sketch after this list.
- A yarp account to update and install the YARP library. Variable YARP_ROOT is set by default to /home/yarp/yarp2 for all users (in /etc/bash.bashrc) <-- change this policy
- An icub account with sudo privileges (created with sudo adduser icub admin on 2009-06-30) <-- change this policy
- cortex6's /etc/hosts file can include the following line:

 127.0.0.1 cortex6
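The page does not show how $HOME/.bash_env is hooked into the login sequence. One plausible wiring (an assumption on our part, not confirmed by the original) uses bash's BASH_ENV mechanism for non-interactive shells plus an explicit source for interactive ones:

 # hypothetical lines in /etc/bash.bashrc (assumed, not confirmed):
 # interactive shells: source the user's environment file if present
 [ -f "$HOME/.bash_env" ] && . "$HOME/.bash_env"
 # non-interactive shells (e.g. commands launched by yarprun) read the
 # file named by BASH_ENV when they start
 export BASH_ENV="$HOME/.bash_env"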
System-wide libraries and repositories
YARP
yarp2 is installed for user icub, similarly to the iCubBrain server configuration.
For now, we don't use system-wide installation (sudo make install). We could use it again after we verify that a user can easily override global settings.
iCub
iCub is installed for user icub, similarly to the iCubBrain server configuration.
For now, we don't use system-wide installation (sudo make install). We could use it again after we verify that a user can easily override global settings.
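For reference, the corresponding out-of-source build without a system-wide install looks roughly like this (paths assumed from the User repositories section below):

 cd ~/iCub/main
 mkdir -p build && cd build
 cmake ..
 make
 # no `sudo make install`: binaries stay in the build tree and are found
 # through the PATH entries set in ~/.bash_env (see User repositories)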
Other libraries, manually installed
Please list here the system-wide libraries and applications that were installed by the superuser, especially the ones that do not have a clean 'make install' procedure but were manually installed into /opt:
- ARToolKit
- Ipopt-3.5.5-linux-x86_32-gcc4.2.4
- cmake 2.6 (it does not come with the version of Ubuntu currently installed, but it is needed by the latest version of yarp, so we installed it via this archive)
Other libraries, installed with Ubuntu packages
These packages were installed with apt-get install:

 libncurses5-dev libace-dev libgsl0-dev libgtk2.0-dev libgtkmm-2.4-dev libglademm-2.4-dev glew-utils libglew1.5-dev libglut-dev git-core

OpenCV:

 libcv-dev libhighgui-dev libcvaux-dev
User repositories
Each user should manage his own yarp2 and iCub repositories. <-- then we shouldn't have done sudo make install here :)
We recommend setting your environment variables in a new file, called ~/.bash_env, containing:

 export YARP_ROOT=~/yarp2
 export YARP_DIR=$YARP_ROOT/build
 export ICUB_ROOT=~/iCub
 export ICUB_DIR=$ICUB_ROOT/main/build
 export PATH=$PATH:$YARP_DIR/bin:$ICUB_DIR/bin
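After logging out and back in, the result can be verified with, e.g.:

 echo $ICUB_DIR    # should print /home/<your_user>/iCub/main/build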
Refer to the RobotCub software article for further instructions.
Other configuration
Subversion
We have set the following parameter in /etc/subversion/config:

 store-passwords = no

This implies that SVN will ask you for your password every time you do a commit. (Don't worry about changing your personal ~/.subversion/config file: the parameter is not actually set there, so the global /etc setting is used.)
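If a password was cached before this setting was introduced, it can be removed from the per-user cache (standard Subversion layout; a sketch, not from the original page):

 # cached credentials live under ~/.subversion/auth
 rm -rf ~/.subversion/auth/svn.simple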
Network tuning
 sysctl -w net.core.rmem_max=8388608
 sysctl -w net.core.wmem_max=8388608
 sysctl -w net.core.rmem_default=65536
 sysctl -w net.core.wmem_default=65536
 sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
 sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
 sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
 sysctl -w net.ipv4.route.flush=1
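These settings raise the kernel's socket buffer limits, which helps sustain high-throughput connections over the gigabit link. Note that sysctl -w changes do not survive a reboot; a sketch of making them persistent via the standard /etc/sysctl.conf mechanism (our suggestion, not confirmed on the original page):

 # append to /etc/sysctl.conf, then apply with: sudo sysctl -p
 net.core.rmem_max = 8388608
 net.core.wmem_max = 8388608
 net.core.rmem_default = 65536
 net.core.wmem_default = 65536
 net.ipv4.tcp_rmem = 4096 87380 8388608
 net.ipv4.tcp_wmem = 4096 65536 8388608
 net.ipv4.tcp_mem = 8388608 8388608 8388608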
Prompt ($PS1)
The prompt is set to user@cortex?:pwd$ in /etc/bash.bashrc. With those settings, if you log in to cortex1, the prompt will be user@cortex1:~$.
We chose to do so because sometimes it's convenient to have the number of the Cortex machine you're working on embedded in the prompt.
By default, though, this configuration is overridden in the users' ~/.bashrc file, and the prompt is set to user@source regardless of the Cortex machine you log in to.
If you want to inhibit this behaviour and thus have a prompt like user@cortex?:pwd, just comment out these lines in your ~/.bashrc:

 # set a fancy prompt (non-color, unless we know we "want" color)
 case "$TERM" in
 xterm-color)
     PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
     ;;
 *)
     PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
     ;;
 esac
However, for users created after 2009-05-07, the prompt is already set to user@cortex?:pwd$ by default.
Helper commands
- Check the kernel architecture: uname -m
- Check a file's type and architecture: file <filename>
- Set the bash shell in /etc/passwd
- Check disk space: du -sh /home
- Check per-user processes: ps -U <user>