Cluster Management in VisLab
''This page is obsolete. Go back to [[iCub instructions]].''
Cluster Manager ([http://eris.liralab.it/wiki/Managing_Applications description] on the LIRA-Lab wiki) is a Python-based GUI that lets a user check and influence the status of <code>yarprun</code> on a cluster of computers.
== Usage and configuration of Cluster Manager ==
To run Cluster Manager, just go to <code>$ICUB_DIR/app/default/scripts</code> and run
 ./icub-cluster.py $ICUB_ROOT/app/$ICUB_ROBOTNAME/scripts/vislab-cluster.xml
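If the GUI reports a node as down, the <code>yarprun</code> server on that node can also be checked or started by hand. A minimal sketch, assuming a running yarp name server; the port name <code>/cortex1</code> is an example, substitute the node names used in <code>vislab-cluster.xml</code>:

 yarp where                   # check that a yarp name server is reachable
 yarprun --server /cortex1    # on the node itself: start a yarprun server under this port name
 yarprun --on /cortex1 --ps   # from any machine: list the processes managed by that server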
We had to add the following lines to <code>~/.bash_env</code> on every machine (Cortex, iCubBrain, chico2, chico3; username icub) in order for non-interactive remote execution over ssh to work:
 export ICUB_DIR=/home/icub/iCub
 export ICUB_ROOT=$ICUB_DIR
 export ICUB_ROBOTNAME=iCubLisboa01
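To verify that a non-interactive shell actually picks these variables up, something like the following can be run from any machine (the host name <code>cortex1</code> is an example):

 ssh icub@cortex1 'echo $ICUB_DIR $ICUB_ROBOTNAME'
 # should print: /home/icub/iCub iCubLisboa01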
We had to create a <code>/tmp/run</code> directory on each of the Cortex machines (these directories are local, i.e., different on each machine in the cluster!), owned by user icub, so that <code>yarprun</code> can work there. It should be writable by anyone, but it probably isn't at the moment. The reason why this directory is needed is not clear at the moment: on pc104, no such directory seems to exist.

WARNING: the contents of this directory might disappear when we restart the Cortex computers, possibly causing problems.
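A minimal sketch of this setup step, assuming sudo access for user icub; the host names <code>cortex1</code>..<code>cortex3</code> are examples:

 for host in cortex1 cortex2 cortex3; do
     ssh -t icub@$host 'sudo mkdir -p /tmp/run &&
                        sudo chown icub /tmp/run &&
                        sudo chmod 1777 /tmp/run'   # 1777 = world-writable with sticky bit
 done

Since /tmp is typically cleared at boot on Linux, recreating the directory from a boot script (e.g. /etc/rc.local) would be one way to address the warning above.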
== yarprun.sh script ==
The script <code>$ICUB_DIR/scripts/yarprun.sh</code> assumes that every machine has a unique name, obtainable with the command <code>uname -n</code>. ''As of 2009-05-12, this works on Cortex as well; for an explanation of how we enforced the desired behaviour on the cluster, see page [[Cluster Management in VisLab/Archive]].''
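A quick way to check this assumption across the cluster (a sketch; the host names are examples):

 for host in cortex1 cortex2 cortex3 icubbrain chico2 chico3; do
     ssh icub@$host uname -n
 done | sort | uniq -d        # prints nothing if all machine names are unique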
[[Category:Vislab]] | |||