Cluster Management in VisLab
Revision as of 22:03, 7 May 2009
Cluster Manager is a Python-based GUI that lets the user inspect and control the status of "yarp run" on a cluster of computers.
To run Cluster Manager, just go to $ICUB_DIR/app/default/scripts and run
./icub-cluster.py
By default, it appears to read its configuration from $ICUB_DIR/app/default/icub-cluster.xml (unverified). We changed it to read its configuration from $ICUB_DIR/app/default/vislab-cluster.xml.
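The source does not show the contents of the cluster XML file. As a purely illustrative sketch, a configuration of this kind typically lists the managed machines and the user account to ssh in as; all element and attribute names below are guesses, not verified against icub-cluster.py, and the node names are hypothetical placeholders taken from machines mentioned on this page:

```xml
<!-- Hypothetical sketch of vislab-cluster.xml; schema not verified -->
<cluster name="VisLab" user="icub">
  <nodes>
    <node>chico2</node>
    <node>cortex1</node>
    <node>cortex2</node>
  </nodes>
</cluster>
```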
We had to write these two lines to ~/.bash_env on the Cortex machines (user icub) and on Chico2 in order for remote execution over ssh to work:
export ICUB_DIR=/home/icub/iCub
export ICUB_ROOT=$ICUB_DIR
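The reason ~/.bash_env matters is that "ssh host command" spawns a non-interactive shell, which does not read ~/.bashrc but does source whatever file BASH_ENV points at. A quick way to check the mechanism locally, using a temp file as a stand-in for ~/.bash_env (the BASH_ENV wiring is our assumption about how the machines are set up, not stated in the wiki):

```shell
# Write the two exports from the wiki to a stand-in env file.
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
export ICUB_DIR=/home/icub/iCub
export ICUB_ROOT=$ICUB_DIR
EOF
# Simulate the non-interactive shell that "ssh host command" runs:
# bash sources $BASH_ENV before executing the command.
BASH_ENV="$envfile" bash -c 'echo "$ICUB_ROOT"'   # prints /home/icub/iCub
rm -f "$envfile"
```

If the echo prints an empty line instead, the env file is not being sourced and remote "yarp run" invocations will miss ICUB_DIR/ICUB_ROOT.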
We had to create a /tmp/run directory on each of the Cortex machines (these directories are local to each machine), owned by user icub and writable by anyone, so that "yarp run" could run on those machines. It is unclear why this is needed: on pc104 no such directory seems to exist.
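The one-time setup described above might look like the following, run as root on each Cortex machine (the sticky bit is our addition, not stated in the wiki; it is the usual way to make a world-writable directory in /tmp safe):

```shell
# Create /tmp/run on this node, owned by icub and writable by anyone,
# so that "yarp run" can use it.
mkdir -p /tmp/run
chown icub /tmp/run    # assumes the "icub" user exists on the node
chmod 1777 /tmp/run    # world-writable; sticky bit keeps users from
                       # deleting each other's files (our assumption)
```

Note that /tmp is usually cleared on reboot, so this directory may need to be recreated after each restart unless the setup is scripted.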