Cluster Management in VisLab
Revision as of 18:22, 14 June 2009
Cluster Manager (description on LIRA-Lab wiki) is a Python-based GUI that lets a user check and influence the status of "yarp run" on a cluster of computers.
Usage and configuration of Cluster Manager
To run Cluster Manager, just go to $ICUB_DIR/app/default/scripts and run
./icub-cluster.py $ICUB_ROOT/app/$ICUB_ROBOTNAME/scripts/vislab-cluster.xml
We had to write these lines to ~/.bash_env on every machine (Cortex, iCubBrain, chico2, chico3 -- username icub), in order for remote (i.e., non-interactive) execution over ssh to work:
export ICUB_DIR=/home/icub/iCub
export ICUB_ROOT=$ICUB_DIR
export ICUB_ROBOTNAME=iCubLisboa01
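A quick way to sanity-check this setup is to read the variables back from a fresh non-interactive shell, the same way ssh remote execution would see them. This is only a sketch: the hostname in the commented ssh line is a placeholder, and the local check assumes ~/.bash_env is the file sourced by non-interactive shells on these machines.

```shell
# Remote check (hostname is a placeholder; substitute any cluster machine):
#   ssh icub@cortex1 'echo $ICUB_DIR $ICUB_ROBOTNAME'
# Local approximation: source ~/.bash_env in a fresh shell and print the
# variables.  An empty value after "=" means the setup did not take effect.
bash -c 'source ~/.bash_env 2>/dev/null; echo "ICUB_DIR=$ICUB_DIR ICUB_ROBOTNAME=$ICUB_ROBOTNAME"'
```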
We had to create a /tmp/run directory on each of the Cortex machines (note that these directories are separate on each machine in the cluster!), owned by user icub, so that "yarp run" can run there. It should be writable by anyone, but it probably isn't at the moment. Why this directory is needed is not clear: on pc104, such a directory doesn't seem to exist. WARNING: the contents of this directory may disappear when the Cortex computers are restarted, possibly causing problems.
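The directory setup described above might be done as follows on each Cortex machine. This is a sketch under the assumptions stated in the text (owned by icub, writable by anyone); run it as user icub, or prefix the commands with sudo and add a chown to icub otherwise.

```shell
# Create /tmp/run so that "yarp run" can use it.  Run on each Cortex
# machine separately -- /tmp is local to each node.
mkdir -p /tmp/run
# World-writable with the sticky bit, like /tmp itself, so any user can
# write but only owners can delete their files.
chmod 1777 /tmp/run
# Inspect the result: owner and permissions should show icub and drwxrwxrwt.
ls -ld /tmp/run
```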
yarprun.sh script
The script $ICUB_DIR/scripts/yarprun.sh assumes that every machine has a unique name, obtainable with the command: "uname -n". As of 2009-05-12, this works on Cortex as well; for an explanation of how we enforced the desired behaviour on the cluster, see page Cluster Management in VisLab/Archive.
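Since yarprun.sh keys everything on the output of "uname -n", a simple check is to confirm the node name is set and non-empty on each machine; the per-host ssh loop in the comment is illustrative only, with placeholder hostnames.

```shell
# Verify the node name yarprun.sh would use on this machine.
# To check uniqueness across the cluster, one could loop over the hosts
# (hostnames below are placeholders):
#   for h in cortex1 cortex2 icubbrain; do ssh icub@$h uname -n; done
node=$(uname -n)
echo "node name: $node"
# Fail loudly if the name is empty -- yarprun.sh relies on it being unique.
test -n "$node"
```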