Cluster Management in VisLab
Cluster Manager is a Python-based GUI that lets the user check and control the status of "yarp run" on a cluster of computers.
To run Cluster Manager, just go to $ICUB_DIR/app/default/scripts and run
./icub-cluster.py
By default it reads its configuration from $ICUB_DIR/app/default/icub-cluster.xml (a guess, not verified). We changed it to read its configuration from $ICUB_DIR/app/default/vislab-cluster.xml instead.
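The notes above do not say where this file name is set; a minimal sketch for locating and replacing it, assuming the path is hard-coded in the script (both the grep target and the copy step are assumptions, not steps recorded here):

 # Assumption: the configuration file name is hard-coded in the script; find where.
 grep -n "icub-cluster.xml" $ICUB_DIR/app/default/scripts/icub-cluster.py
 # Assumption: vislab-cluster.xml starts as a copy of the default file and is then edited by hand.
 cp $ICUB_DIR/app/default/icub-cluster.xml $ICUB_DIR/app/default/vislab-cluster.xml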
We had to add these two lines to ~/.bash_env on the Cortex machines (user icub) and on Chico2 for remote execution over ssh to work:
 export ICUB_DIR=/home/icub/iCub
 export ICUB_ROOT=$ICUB_DIR
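A quick way to check that non-interactive ssh sessions actually pick these variables up (assuming the machines source ~/.bash_env for non-interactive shells, e.g. via BASH_ENV; the hostname cortex1 below is only a placeholder for one of the Cortex machines):

 # Placeholder hostname; substitute one of the actual Cortex machines.
 ssh icub@cortex1 'echo $ICUB_DIR'
 # Expected output if ~/.bash_env is being sourced:
 # /home/icub/iCub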
We had to create a /tmp/run directory on each of the Cortex machines (the directories are local, one per machine), owned by user icub, so that "yarp run" could run on those machines. It should probably be writable by anyone, but I guess it is not at the moment. I have no clue why this directory is needed: on pc104 it does not seem to exist. WARNING: the contents of this directory might disappear when we restart the Cortex computers, possibly causing problems.
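A sketch of that setup step, to be run on each Cortex machine (the exact commands were not recorded here, and the permission bits below are an assumption based on the remark that the directory should be writable by anyone):

 # As user icub, on each Cortex machine (the directory is local, not shared):
 mkdir -p /tmp/run
 # Assumption: world-writable with the sticky bit, as the note above suggests it should be.
 chmod 1777 /tmp/run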
The script yarprun.sh assumes that every machine has a unique name, obtainable with the command "uname -n". On the Cortex cluster this is not the case: you get "source" on all 5 Cortex computers. So we copied the script to yarprunVislab.sh and changed one line as follows:
 #ID=/`uname -n`
 ID=/cortex`ifconfig eth0 | grep "inet addr" | awk '{print $2}' | awk -F: '{print $2}' | awk -F. '{print $4}'`
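To illustrate what the pipeline does (the address below is made up for the example): if eth0 on a Cortex machine were configured as 10.10.1.53, the line would work out as follows:

 # ifconfig eth0 | grep "inet addr" prints something like:
 #   inet addr:10.10.1.53  Bcast:10.10.1.255  Mask:255.255.255.0
 # awk '{print $2}'      ->  addr:10.10.1.53
 # awk -F: '{print $2}'  ->  10.10.1.53
 # awk -F. '{print $4}'  ->  53
 # so this machine gets the unique identifier:
 #   ID=/cortex53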
Of course, we also had to copy $ICUB_DIR/scripts/icub-cluster.py to $ICUB_DIR/scripts/icub-clusterVislab.py and change all invocations of yarprun.sh to yarprunVislab.sh in the latter.
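A minimal sketch of that step (the sed command is ours, not from the original notes; the edit may just as well have been done by hand):

 cp $ICUB_DIR/scripts/icub-cluster.py $ICUB_DIR/scripts/icub-clusterVislab.py
 # Assumption: a plain textual substitution of the script name is sufficient.
 sed -i 's/yarprun\.sh/yarprunVislab.sh/g' $ICUB_DIR/scripts/icub-clusterVislab.py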