Cortex is a computation rack for VisLab humanoid robots. It contains 7 machines:

* 1 server that manages startup, shutdown and the file system of the clients;
* 6 clients (named <code>cortex1</code>...<code>cortex6</code>) that run user processes.

Clients 1 to 5 mount the same file system, so a change made in the file system of cortex[1-5] is reflected on the other four clients. The client <code>cortex6</code> is kept separate for now because it runs a 64-bit Linux.

''Old information can be consulted at [[Cortex/Archive]].''
= Specifications =

As of 2017, Cortex is used by VisLab as a server for running simulations (not the ones with the iCub robot). There is one machine (cortex1) with these specs:

* 8 x [http://ark.intel.com/products/65523/Intel-Core-i7-3770K-Processor-(8M-Cache-up-to-3_90-GHz) i7-3770K] @ 3.50GHz processor
* 16GB of memory (<code>sudo dmidecode --type 17</code> to see RAM speed and type)
* 112GB SSD drive + 1TB HDD drive
* NVidia [http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-670 GeForce GTX 670] graphics card (CUDA)
* internal ISR IP address: 10.10.1.1

= Network setup =

Cortex machines are in the VisLab robotics network domain:

* Domain: visnet
* Subnet: 10.10.1.*

== Cortex nodes ==

The Cortex server and clients have the following IPs and domain names:

* Server: 10.10.1.240, server.visnet
* Client 1: 10.10.1.1, cortex1.visnet
* Client 2: 10.10.1.2, cortex2.visnet
* Client 3: 10.10.1.3, cortex3.visnet
* Client 4: 10.10.1.4, cortex4.visnet
* Client 5: 10.10.1.5, cortex5.visnet
* Client 6: 10.10.1.6, cortex6.visnet
== Other nodes ==

Other assigned IPs and names are:

* Gateway: 10.10.1.254, gtisr.visnet
* Cortex Switch: 10.10.1.250, swcompurack.visnet
* Vislab Switch: 10.10.1.251, swvislab.visnet
* iCubBrain1: 10.10.1.41, icubbrain1.visnet
* iCubBrain2: 10.10.1.42, icubbrain2.visnet
* DHCP Range: 10.10.1.100-199
* Chico pc104: 10.10.1.50
* Chico clients: 10.10.1.51-59
* Chica Net: 10.10.1.60-69
* Balta Net: 10.10.1.70-79
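A quick way to check that name resolution and connectivity work from inside the visnet domain (the username below is a placeholder):

 ping cortex1.visnet        # resolve and reach a client by name
 ssh username@10.10.1.1     # or log in to the same client directly by IP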
== DNS configuration ==

Name resolution in the visnet network is managed by a BIND system on the machine <code>server.visnet</code>. To add or change the table of names and IPs, go to <code>/etc/bind</code>, edit the file <code>visnet.src</code> (superuser permission is needed) and then run <code>make</code>. After a while (the tables must be copied to the ISR DNS server, which may take a few minutes) you can ping the new machines by name.

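
A minimal sketch of that workflow (the host name, address and editor below are placeholders):

 cd /etc/bind
 sudo nano visnet.src       # add an entry such as:  a(newhost, 80, "new machine")
 sudo make                  # regenerate the zone data
 ping newhost.visnet        # works once the tables have propagated to the ISR DNS server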
As of September 2009, the configuration file is as follows:
 ## ----------------------------------------------------------------------
 ## DNS table -- VisNet    (Alex, 2 September 2009)
 ## ----------------------------------------------------------------------
 ##
 domain("visnet")
 prefix("10.10.1")
 soa("server.isrnet", "alex", "isr.ist.utl.pt")
 ns("server.visnet")
 ##
 ## Syntax:
 ## NAME ------- IP ---- COMMENT (in quotes) ----------------------
 ##
 a(cortex1, 1, "Compurack client 1")
 a(cortex2, 2, "Compurack client 2")
 a(cortex3, 3, "Compurack client 3")
 a(cortex4, 4, "Compurack client 4")
 a(cortex5, 5, "Compurack client 5")
 a(cortex6, 6, "Compurack client 6")
 a(cortex7, 7, "Compurack client 7")
 a(icubbrain1, 41, "Supermicro 1")
 a(icubbrain2, 42, "Supermicro 2")
 a(pc104, 50, "Chico pc104")
 a(icubsrv, 51, "iCub chico")
 a(chico2, 52, "iCub chico")
 a(chico3, 53, "iCub chico")
 a(chico4, 54, "iCub chico")
 a(chico5, 55, "iCub chico")
 a(chico6, 56, "iCub chico")
 a(chico7, 57, "iCub chico")
 a(chico8, 58, "iCub chico")
 a(chico9, 59, "iCub chico")
 a(chica, 60, "iCub chica")
 a(chica1, 61, "iCub chica")
 a(chica2, 62, "iCub chica")
 a(chica3, 63, "iCub chica")
 a(chica4, 64, "iCub chica")
 a(chica5, 65, "iCub chica")
 a(chica6, 66, "iCub chica")
 a(chica7, 67, "iCub chica")
 a(chica8, 68, "iCub chica")
 a(chica9, 69, "iCub chica")
 a(balta, 70, "Baltazar")
 a(balta1, 71, "Baltazar")
 a(balta2, 72, "Baltazar")
 a(balta3, 73, "Baltazar")
 a(balta4, 74, "Baltazar")
 a(balta5, 75, "Baltazar")
 a(balta6, 76, "Baltazar")
 a(balta7, 77, "Baltazar")
 a(balta8, 78, "Baltazar")
 a(balta9, 79, "Baltazar")
 range(dhcp-, 100, 199)
 a(server, 240, "Compurack server")
 a(swcompurack, 250, "Compurack switch")
 a(swvislab, 251, "Vislab switch")
 a(gtisr, 254, "gateway")
 ## -- EOF --
 ### Local Variables: ###
 ### gendns-keywords: ("^domain" "^prefix" "^soa" "^ns" "^mx" "^a" "^range" "^rangex" "^cname" ("\t" . highlight)) ###
 ### font-lock-defaults: (gendns-keywords nil t ((?# . "<") (?\n . ">")) nil) ###
 ### mode: font-lock ###
 ### End: ###
== Connectivity ==

Cortex machines are connected to the Cortex switch, which links to the VisLab switch through a 4 Gbit/s fiber optic connection.
== Traffic ==

Network traffic can be checked at:

* http://inode1.isrnet/cacti (user guest, pass guest)
= Additional setup =
== Server machine ==

The server has:

* A boot folder for the clients at <code>/tftpboot/pxelinux.cfg</code>. It contains the files:
** <code>default</code> - the default boot file;
** <mac_address> - a boot file specific to the machine with the given MAC address (see the sketch below).
* Startup scripts for each machine at <code>/nfsroot/app</code>.
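As a hedged illustration of the per-machine boot files (the MAC address is made up, and PXELINUX usually expects the file name to be <code>01-</code> followed by the MAC address with dashes; check the convention actually used on this server):

 # give the client with MAC 00:1a:2b:3c:4d:5e its own boot configuration
 cd /tftpboot/pxelinux.cfg
 cp default 01-00-1a-2b-3c-4d-5e
 nano 01-00-1a-2b-3c-4d-5e     # adjust the kernel/append options for that client only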
== Client machines ==

The clients have:

* A superuser account (<code>compurack</code>) to administer system-wide settings (configurations, libraries, etc.).
* Normal user accounts. By default, the login script runs the contents of the file <code>$HOME/.bash_env</code>, where users can set their environment variables, e.g., <code>export ICUB_DIR=$HOME/iCub</code>. This works both for interactive shell sessions and for non-interactive ones (i.e., commands remotely invoked by <code>yarprun</code>).
* A <code>yarp</code> account to update and install the YARP library. The variable <code>YARP_DIR</code> is set by default to <code>/home/yarp/yarp2</code> for all users (in <code>/etc/bash.bashrc</code>).
* An <code>icub</code> account with sudo privileges (created with <code>sudo adduser icub admin</code> on 2009-06-30).
== System-wide libraries and repositories ==

=== YARP ===

As reported on the [[VisLab logbook]], in September 2009 we installed the [[RobotCub software | yarp2 SVN repository]] under the user <code>yarp</code>, by downloading it and then running <code>cmake .</code>, <code>make</code> and <code>sudo make install</code>.

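
A minimal sketch of those steps, assuming the checkout lives in <code>/home/yarp/yarp2</code> (the repository URL is the one given on the [[RobotCub software]] page and is left as a placeholder here):

 cd /home/yarp
 svn co <yarp2-repository-url> yarp2
 cd yarp2
 cmake .
 make
 sudo make install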
=== iCub ===

As reported on the [[VisLab logbook]], in September 2009 we installed the [[RobotCub software | iCub SVN repository]] under the user <code>icub</code>, by downloading it and then running <code>cmake .</code>, <code>make</code> and <code>sudo make install</code>. There was a conflict with iKin, which could not find <code>libipopt.so.0</code>; it is now fixed by setting the environment variable

 LD_LIBRARY_PATH=/opt/Ipopt-3.5.5-linux-x86_32-gcc4.2.4/lib/

in <code>/home/icub/.bash_env</code>.

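
One way to apply that fix (a sketch; the <code>export</code> and the appended pre-existing value are small additions to the line shown above):

 # run as the icub user
 echo 'export LD_LIBRARY_PATH=/opt/Ipopt-3.5.5-linux-x86_32-gcc4.2.4/lib/:$LD_LIBRARY_PATH' >> /home/icub/.bash_env
 source /home/icub/.bash_env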
=== Other libraries, manually installed ===

Please list here the system-wide libraries and applications that were installed by the superuser, especially the ones that do not have a clean 'make install' procedure and were installed manually into <code>/opt</code>:

* ARToolKit
* Ipopt-3.5.5-linux-x86_32-gcc4.2.4
=== Other libraries, cleanly installed ===

These packages were installed with <code>apt-get install</code> (they can also be installed in a single command, as shown after the list):

 libncurses5-dev
 libace-dev
 cmake
 libgsl0-dev
 libgtk2.0-dev libgtkmm-2.4-dev libglademm-2.4-dev
 glew-utils libglew1.5-dev
 libglut-dev
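For reference, the same set in one command (package names exactly as listed above):

 sudo apt-get install libncurses5-dev libace-dev cmake libgsl0-dev \
     libgtk2.0-dev libgtkmm-2.4-dev libglademm-2.4-dev \
     glew-utils libglew1.5-dev libglut-dev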
OpenCV (note: the repository has since moved from CVS to SVN, so these instructions need to be updated):

 cvs -z3 -d:pserver:anonymous@opencvlibrary.cvs.sourceforge.net:/cvsroot/opencvlibrary co -P opencv
 cd opencv
 ./configure
 make
 make install

then add <code>/usr/local/lib</code> to <code>/etc/ld.so.conf</code> and run <code>ldconfig</code>.
== User repositories ==

RE-THINK THIS POLICY (plus, we installed the iCub svn):

Each user should manage their own repositories, e.g. the iCub repository:

 cvs -d vislab@cvs.robotcub.org:/cvsroot/robotcub co iCub

Then add <iCub>/bin to your PATH by editing your ~/.bashrc like this:

 PATH=$PATH:~/iCub/bin/
 ICUB_DIR=~/iCub/
 export ICUB_DIR
 ICUB_ROOT=$ICUB_DIR
 export ICUB_ROOT
You should also edit ~/.bash_env, adding these lines:

 export ICUB_DIR=$HOME/iCub
 export ICUB_ROOT=$ICUB_DIR

This is needed when you connect non-interactively via ssh to a Cortex computer, for instance when executing a "yarp run ..." command on a Cortex machine from Chico2 (see the example below).

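
A hedged example of such a remote invocation, assuming a yarprun server has been started on cortex1 with the port name <code>/cortex1</code> (the tag and command are placeholders):

 # on cortex1, once: start the yarprun server
 yarp run --server /cortex1
 # on chico2: launch a module on cortex1 through that server
 yarp run --on /cortex1 --as viewer --cmd yarpview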
Be aware that Ubuntu 7.10 (the version currently installed on the cluster) has a conflict with iKin, specifically with iCub/conf/FindIPOPT.cmake (used by iKin). For now, in order to compile iKin, change the following line of FindIPOPT.cmake from

 SET(IPOPT_LIB ${IPOPT_LIB} gfortranbegin gfortran)

to

 SET(IPOPT_LIB ${IPOPT_LIB} gfortran)
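The same edit as a one-liner (a sketch, assuming the per-user checkout at ~/iCub described above):

 sed -i 's/gfortranbegin gfortran/gfortran/' ~/iCub/conf/FindIPOPT.cmake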
= Other configuration =

== Network tuning ==

 sysctl -w net.core.rmem_max=8388608
 sysctl -w net.core.wmem_max=8388608
 sysctl -w net.core.rmem_default=65536
 sysctl -w net.core.wmem_default=65536
 sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
 sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
 sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
 sysctl -w net.ipv4.route.flush=1
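These settings do not survive a reboot; as a hedged note, the same keys can be made persistent by adding them to <code>/etc/sysctl.conf</code> (values mirror the commands above) and applying them with <code>sudo sysctl -p</code>:

 net.core.rmem_max = 8388608
 net.core.wmem_max = 8388608
 net.core.rmem_default = 65536
 net.core.wmem_default = 65536
 net.ipv4.tcp_rmem = 4096 87380 8388608
 net.ipv4.tcp_wmem = 4096 65536 8388608
 net.ipv4.tcp_mem = 8388608 8388608 8388608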
== Prompt ($PS1) ==

The prompt is set to "user@cortex?:pwd$" in <code>/etc/bash.bashrc</code>. With those settings, if you log in to cortex1, the prompt will be "user@cortex1:~$". We chose to do so because it is sometimes convenient to have the number of the Cortex machine you are working on embedded in the prompt.

By default, though, this configuration is overridden in the users' ~/.bashrc file, and the prompt is set to "user@source" regardless of the Cortex machine you log in to. If you want to inhibit this behaviour and thus get a prompt like "user@cortex?:pwd", just comment out these lines in your ~/.bashrc:

 # set a fancy prompt (non-color, unless we know we "want" color)
 case "$TERM" in
 xterm-color)
     PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
     ;;
 *)
     PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
     ;;
 esac

However, for users created after 2009-05-07, the prompt is already set to "user@cortex?:pwd$" by default.
= Helper commands =

* Check the kernel architecture: uname -m
* Check a file's type (e.g., whether a binary is 32 or 64 bit): file
* Set the bash shell for a user in /etc/passwd (see the one-liner below)
* Check disk space: du -h -s /home
* Check per-user processes: ps -U <user>
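For the shell change, a one-liner alternative to editing /etc/passwd by hand (the username is a placeholder):

 sudo chsh -s /bin/bash username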
[[Category:Vislab]]