Questions and Answers : Unix/Linux : SETI@home running in a Condor cluster
Author: Csiga | Joined: 10 May 01 | Posts: 2 | Credit: 283,597,249 | RAC: 22

Hello, I have a question. I have a Condor pool. Can I run SETI@home on my Condor cluster system? Thank you.
Author: Brett Fithall | Joined: 2 Jun 03 | Posts: 1 | Credit: 5,287,663 | RAC: 0

I also have access to a Condor cluster, as well as a 100-node HPCC cluster running Platform OCS/PBS. Is there a Platform OCS role, or instructions for PBS? Regards, Brett
Author: Dotsch | Joined: 9 Jun 99 | Posts: 2422 | Credit: 919,393 | RAC: 0

I have read in another thread that people have successfully distributed single BOINC instances to the nodes via HPC mechanisms and started a dedicated BOINC instance on each node.
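A minimal sketch of that per-node approach (the node names and paths below are placeholders, not details from the thread): push one installation directory per node over SSH, then start a dedicated instance on each. This version only collects and prints the commands it would run, so it can be inspected before anything touches the cluster.

```shell
#!/bin/sh
# Sketch of the per-node distribution described above. NODES and BASE are
# placeholder assumptions; the loop collects the scp/ssh commands it would
# run and prints them (dry run) rather than executing anything.
NODES="node01 node02 node03"
BASE="$HOME/projects/nodes"

PLAN=""
for n in $NODES; do
  # one directory and one dedicated BOINC instance per node
  PLAN="$PLAN
scp -r $BASE/template $n:$BASE/$n
ssh $n $BASE/$n/BOINC/start.sh"
done

echo "$PLAN"
```

Dropping the dry run is just a matter of replacing the string accumulation with the real `scp`/`ssh` calls, assuming password-less keys are already in place.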
Author: Csiga | Joined: 10 May 01 | Posts: 2 | Credit: 283,597,249 | RAC: 22

Hi again. I'm still working on the Condor problem, but in the meantime I have another idea: a shared NFS home plus SSH, with one BOINC instance per node. So:

Step 1: Create the NFS-shared home directory on all PCs in the cluster.

On the NFS server, in /etc/exports:

```
/home 10.0.1.0/255.255.255.0(rw,no_subtree_check,sync,no_root_squash)
```

On each workstation, in /etc/fstab:

```
10.0.1.1:/home /home nfs defaults,nolock 0 2
```

Mount /home (or reboot the workstation) after adding your user to /etc/passwd for authentication, or use LDAP + Kerberos, etc. Warning: mind the permissions!

Step 2: Create an SSH key for password-less logins (leave the passphrase empty):

```
ssh-keygen -t dsa
cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys2
```

Step 3: Create one directory per node, plus directories for commands and tools:

```
mkdir -p ~/projects/nodes/HOSTNAME1    # example: mkdir -p ~/projects/nodes/node75
mkdir -p ~/projects/nodes/HOSTNAME2
mkdir -p ~/projects/commands
mkdir -p ~/projects/tools
```

Step 4: Install BOINC into each node's directory (my cluster has 14 members; repeat this for every host):

```
cd ~/projects/nodes/HOSTNAME1
wget http://boincdl.ssl.berkeley.edu/dl/boinc_5.10.21_i686-pc-linux-gnu.sh
sh boinc_5.10.21_i686-pc-linux-gnu.sh
cp -v ./boinc_5.10.21_i686-pc-linux-gnu.sh ~/projects/tools/
```

All hosts!

Step 5: Attach each installation to the project (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx is your account key):

```
cd ~/projects/nodes/HOSTNAME1/BOINC
./boinc --attach_project http://setiathome.berkeley.edu/ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Repeat in every node directory!

Step 6: Wait for the attach-project process to finish.

Step 7: Stop the client with CTRL+C.

Step 8: Create the control script:

```
cd ~/projects/commands
cat > boincctl << "EOF"
#!/bin/bash
# created: Csiga | http://setiathome.berkeley.edu/show_user.php?userid=53878
case $1 in
  start)
    echo "Starting BOINC nodes in cluster..."
    ssh HOSTNAME1 ~/projects/nodes/HOSTNAME1/BOINC/start.sh
    ssh HOSTNAME2 ~/projects/nodes/HOSTNAME2/BOINC/start.sh
    ;;
  stop)
    echo "Stopping BOINC nodes in cluster..."
    ssh HOSTNAME1 killall boinc
    ssh HOSTNAME2 killall boinc
    ;;
  status)
    echo "-------------------"
    echo "Process HOSTNAME1:"
    ssh HOSTNAME1 ~/projects/commands/run
    echo "-------------------"
    echo "Process HOSTNAME2:"
    ssh HOSTNAME2 ~/projects/commands/run
    echo "-------------------"
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
EOF
```

Step 9: Make it executable:

```
chmod +x boincctl
```

Step 10: Create the start.sh files in your host directories:

```
cd ~/projects/nodes/HOSTNAME1/BOINC
cat > start.sh << "EOF"
#!/bin/bash
cd ~/projects/nodes/HOSTNAME1/BOINC && exec ./boinc --daemon "$@"
EOF
chmod +x start.sh
```

Every host directory!

Step 11: Create the "run" status helper:

```
cd ~/projects/commands
cat > run << "EOF"
#!/bin/bash
PS=/bin/ps
GREP=/bin/grep
if $PS aux -www | $GREP setiathome | $GREP -q -v grep; then
    echo "Running"
else
    echo "Not running"
fi
EOF
chmod +x run
```

Step 12: Let's go! Log in to the NFS server, or any node, and:

```
cd ~/projects/commands
./boincctl start     # start the cluster
./boincctl stop      # stop it
./boincctl status    # check each node
```

Good luck!
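The hard-coded HOSTNAME1/HOSTNAME2 pairs in boincctl grow tedious on a 14-node cluster. One way to sketch the same start/stop/status logic over a host list (node names are placeholders; this dry-run variant prints the ssh commands instead of executing them):

```shell
#!/bin/sh
# Variant of the boincctl script above that loops over a node list instead
# of repeating each hostname. NODES is a placeholder assumption; the script
# prints the ssh commands it would run (dry run) so it can be inspected.
NODES="node01 node02 node03"
BASE="$HOME/projects/nodes"

action="${1:-status}"
PLAN=""
for n in $NODES; do
  case "$action" in
    start)  c="$BASE/$n/BOINC/start.sh" ;;
    stop)   c="killall boinc" ;;
    status) c="$HOME/projects/commands/run" ;;
    *)      echo "Usage: $0 {start|stop|status}"; exit 1 ;;
  esac
  PLAN="$PLAN
ssh $n $c"
done

echo "$PLAN"
```

Adding a node then means editing one list rather than three case branches; swapping `echo "$PLAN"` for real `ssh` calls makes it live.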
©2026 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.