Cluster Management in VisLab
Cluster Manager is a Python-based GUI that lets the user check and control the status of "yarp run" on a cluster of computers.
To run Cluster Manager, just go to $ICUB_DIR/app/default/scripts and run
 ./icub-cluster.py
By default, it reads its configuration from $ICUB_DIR/app/default/icub-cluster.xml (a guess; not verified). We changed it to read its configuration from $ICUB_DIR/app/default/vislab-cluster.xml instead.
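As a rough sketch of how the VisLab configuration can be produced (starting from a copy of the default file and the choice of editor are assumptions; the XML schema itself is not documented here):

 cp $ICUB_DIR/app/default/icub-cluster.xml $ICUB_DIR/app/default/vislab-cluster.xml
 # edit the copy so it lists the VisLab machines (Chico2, the Cortex nodes, pc104)
 nano $ICUB_DIR/app/default/vislab-cluster.xml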
We had to add these two lines to ~/.bash_env on the Cortex machines (user icub) and on Chico2 in order for remote execution over ssh to work:

 export ICUB_DIR=/home/icub/iCub
 export ICUB_ROOT=$ICUB_DIR
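A minimal sketch of pushing those two lines out from one machine, assuming the host names cortex1..cortex5 and chico2 resolve, that /home/icub/iCub is the right path everywhere, and that the remote shells actually source ~/.bash_env for non-interactive ssh sessions (all assumptions about our setup):

 for host in cortex1 cortex2 cortex3 cortex4 cortex5 chico2; do
   ssh icub@$host 'echo "export ICUB_DIR=/home/icub/iCub" >> ~/.bash_env
                   echo "export ICUB_ROOT=\$ICUB_DIR"     >> ~/.bash_env'
 done
 # quick check: this should print /home/icub/iCub if the environment is picked up
 ssh icub@cortex1 'echo $ICUB_DIR'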
We had to create a /tmp/run directory on each of the Cortex machines (these are local directories, one per machine), owned by user icub, so that "yarp run" could run there. The directory should be writable by anyone, but presumably it is not at the moment. It is not clear why this is needed: on pc104 no such directory seems to exist. WARNING: the content of this directory might disappear when the Cortex computers are restarted, possibly causing problems.
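A minimal sketch of creating that directory on every Cortex node in one go (the host names are assumptions; mode 1777 makes it world-writable with the sticky bit, like /tmp itself, which is what "writable by anyone" would mean in practice):

 for host in cortex1 cortex2 cortex3 cortex4 cortex5; do
   ssh icub@$host 'mkdir -p /tmp/run && chmod 1777 /tmp/run'
 done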
The script yarprun.sh assumes every machine has a unique name, obtainable with the command "uname -n". On Cortex that is not the case: you get "source" on all 5 of the Cortex computers. So we copied the script to yarprunVislab.sh and changed the line

 ID=/`uname -n`

to:

 ID=/`uname -n`
 if [ "$ID" == "/source" ];
 then
     ID=/cortex`ifconfig eth0 | grep "inet addr" | awk '{print $2}' | awk -F: '{print $2}' | awk -F. '{print $4}'`;
 fi;

(the added lines build a name of the form cortexN, where N is the last octet of the machine's eth0 IP address). Of course, we also had to copy $ICUB_DIR/scripts/icub-cluster.py to $ICUB_DIR/scripts/icub-clusterVislab.py and change every invocation of yarprun.sh to yarprunVislab.sh in the latter.
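A hedged sketch of those copy-and-rename steps (the sed one-liner is just one way of doing the substitution, not necessarily how the file was actually edited):

 cp $ICUB_DIR/scripts/yarprun.sh $ICUB_DIR/scripts/yarprunVislab.sh
 # apply the "uname -n" workaround shown above to yarprunVislab.sh, then:
 cp $ICUB_DIR/scripts/icub-cluster.py $ICUB_DIR/scripts/icub-clusterVislab.py
 sed -i 's/yarprun\.sh/yarprunVislab.sh/g' $ICUB_DIR/scripts/icub-clusterVislab.py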
We also had to copy yarprunVislab.sh to all of the machines (Chico2, pc104: NOT DONE YET).
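A possible way to push the script out once it is final, assuming $ICUB_DIR points to the same path on every machine and that the host names below are right (all assumptions):

 for host in chico2 pc104 cortex1 cortex2 cortex3 cortex4 cortex5; do
   scp $ICUB_DIR/scripts/yarprunVislab.sh icub@$host:$ICUB_DIR/scripts/
 done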
It would be easier if the Cortex machines returned "cortex1", "cortex2", etc. instead of "source" when "uname -n" is run.
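If we can get that changed, a rough sketch of what it would take on each Cortex node (run as root; the exact files are distribution-dependent, and "cortex1" plus a Debian-style /etc/hostname are assumptions):

 hostname cortex1               # takes effect immediately, but is lost on reboot
 echo cortex1 > /etc/hostname   # keeps the name across reboots on Debian-like systems
 uname -n                       # should now print "cortex1"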