Cluster Management in VisLab

''This page is obsolete. Go back to [[iCub instructions]].''
 
Cluster Manager ([http://eris.liralab.it/wiki/Managing_Applications description] on LIRA-Lab wiki) is a Python-based GUI that lets a user check and influence the status of <code>yarprun</code> on a cluster of computers.
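
Under the hood, the GUI simply talks to "yarp run" servers on the cluster machines, so the same status checks can be done by hand. A minimal sketch, assuming a YARP version whose <code>yarp run</code> supports the <code>--server</code>, <code>--on</code> and <code>--ps</code> options (the port name /cortex1 is only an example):

  # start a yarp run server on a node (this is what Cluster Manager does via ssh)
  yarp run --server /cortex1
  # from any machine on the YARP network: list what that server is currently running
  yarp run --on /cortex1 --ps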


== Usage and configuration of Cluster Manager ==


To run Cluster Manager, just go to $ICUB_DIR/app/default/scripts and run
   ./icub-cluster.py $ICUB_ROOT/app/$ICUB_ROBOTNAME/scripts/vislab-cluster.xml
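
For example, assuming the environment variables described below are already set in the shell (see the ~/.bash_env snippet in the next paragraph), the full launch is:

  cd $ICUB_DIR/app/default/scripts
  ./icub-cluster.py $ICUB_ROOT/app/$ICUB_ROBOTNAME/scripts/vislab-cluster.xml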


We had to write these lines to ~/.bash_env on every machine (Cortex, iCubBrain, chico2, chico3 -- username icub) so that remote (i.e., non-interactive) execution over ssh works:
   export ICUB_DIR=/home/icub/iCub
   export ICUB_ROOT=$ICUB_DIR
   export ICUB_ROBOTNAME=iCubLisboa01
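
A quick way to check that the environment is really visible to non-interactive shells is to run a command over ssh and print one of the variables (the host name cortex1 is only an example; repeat for each machine):

  # should print /home/icub/iCub if ~/.bash_env is picked up for non-interactive shells
  ssh icub@cortex1 'echo $ICUB_DIR'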


We had to create a /tmp/run directory on the Cortex machines (on each one of them: /tmp is local, so the directory has to exist separately on every machine in the cluster), owned by user icub, so that "yarp run" can work there. It should be writable by anyone, but it probably isn't at the moment. Why this is needed is not clear at the moment: on pc104, such a directory doesn't seem to exist.
WARNING: the content of this directory might disappear when the Cortex computers are restarted, possibly causing problems.
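
A sketch of how the directory can be recreated after a reboot, assuming it is enough for it to be owned by icub and world-writable as described above (run it as user icub on every Cortex machine, since /tmp is local to each node):

  mkdir -p /tmp/run      # repeat on each Cortex machine: /tmp is not shared
  chmod a+rwx /tmp/run   # make it writable by anyone, as recommended above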


== yarprun.sh script ==


The script $ICUB_DIR/scripts/yarprun.sh assumes that every machine has a unique name, obtainable with the command "uname -n". ''As of 2009-05-12, this works on Cortex as well; for an explanation of how we enforced the desired behaviour on the cluster, see page [[Cluster Management in VisLab/Archive]]''.
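
A quick way to verify that assumption across the whole cluster (the host list is only an example and should match the machines in vislab-cluster.xml):

  # every line printed should be a different machine name
  for h in chico2 pc104 cortex1 cortex2 cortex3 cortex4 cortex5; do
      ssh icub@$h 'uname -n'
  done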
 


[[Category:Vislab]]
