Questions and Answers : Unix/Linux : Beowulf
Author | Message |
---|---|
Tare (Joined: 17 May 99, Posts: 4, Credit: 169,036, RAC: 0) | I think somebody has asked this already, but I'll ask again: can SETI@home run on a Beowulf system so that it behaves like one many-CPU machine (2 cores = two SETI@home tasks; 1 master + 3 children = 4 tasks, or 8 if they are dual-core, 16 if quad, or 2x4 cores x 4 = 32...)? I want to build a small Beowulf system to experiment with and to learn about Linux (I have 3 smaller machines I can strip for this project), but I have no other science projects (I have been loyal to SETI@home from the beginning :), so it is the only right thing to do... This Microwulf system, http://www.calvin.edu/news/releases/2007-08/microwulf.htm , looks like a nice hardware setup, but I have no money for new hardware, so I'll build it from old spare parts :-) - Tare |
Dotsch (Joined: 9 Jun 99, Posts: 2422, Credit: 919,393, RAC: 0) | I think there is no great benefit in using Beowulf to distribute the work across the cluster nodes. The BOINC client has its own scheduling mechanism and was not designed for clusters like Beowulf, so a Beowulf implementation would be complicated. I think it is better to install a BOINC client on each Linux system and start it at boot time. |
Tare (Joined: 17 May 99, Posts: 4, Credit: 169,036, RAC: 0) | Yes, that may be the better thing to do, but I'd like to try a Beowulf system because I have 4 small (useless :-) PCs in my corner (or rather 3, plus one motherboard+CPU+RAM+PSU without a hard disk), and I want to keep power use as low as I can: no HD, CD or GPU. Of course it would be easy to set all 4 up running from hard drives, but I also want to learn to use Linux and to learn clustering. I'm very much a Windows user (since about the mid-90s; before that, 100% Amiga. Windows sucks, but I'm 100% on it now and can't get rid of it :). - Tare |
OzzFan (Joined: 9 Apr 02, Posts: 15691, Credit: 84,761,841, RAC: 28) | The BOINC software would have to be completely rewritten to be compatible with this special type of system. It's not that it can't be done, but that it defeats the whole point of distributed computing. The whole idea behind distributed computing is to take all the 'spare' cycles from computers around the world and turn them into a 'sort-of' supercomputer. If someone builds a supercomputer to run this stuff, then it's not really using the 'spare' cycles of the average machine anymore. The basic premise behind distributed computing was cost. Sure, SETI@home (and other projects) could use supercomputers or Beowulf clusters, but at what cost? Most math-intensive science projects simply need lots of computing power but don't have the funding to afford supercomputers or Beowulf clusters, so distributed computing was invented on the theory that all the world's computers' spare clock cycles would be more powerful than even the world's most powerful supercomputer. For this reason alone, I don't believe the developers will be looking into making a 'cluster' version of the BOINC software. But since it is open source, there's nothing preventing talented people out there from making the modifications themselves (though per the open source license, you must share the source code of your changes). |
Dotsch (Joined: 9 Jun 99, Posts: 2422, Credit: 919,393, RAC: 0) | Quote from Tare: "...I'd like to try a Beowulf system because I have 4 small (useless :-) PCs in my corner... I also want to learn to use Linux and to learn clustering." It is no problem to install a diskless Linux cluster, boot all your nodes over the network from one single PC, and run an independent BOINC instance on each node. There have been some postings about this topic in the past, including some howtos. The design of BOINC makes it complicated to run under HPC cluster software, so I think you will have better success testing your cluster with another application. But you can also run BOINC and Beowulf test applications in parallel. I installed Beowulf back in the SETI classic days; distribution over Beowulf gave no benefit for a single-threaded application. |
Tare (Joined: 17 May 99, Posts: 4, Credit: 169,036, RAC: 0) | I'm not a good enough C++ programmer to make changes like that to SETI@home (that's someone else's job :), and I have been using the client since the very start, so I know what SETI@home does with a client computer. I didn't mean splitting one calculation across all nodes (1/4 = 4 times faster); I mean that if I get the system clustering and run S@H on node 0 (the nodes are 0-3), does S@H see the other nodes as extra CPUs, so it can run 4 tasks at once (each slower, but 4 at the same time on the nodes' CPUs and RAM, with no need for extra hardware)? I have no other scientific application I want to test the cluster with (it will only be a clustering experiment, not running 24/7/365), and the system will not be as powerful as 'real' clusters. Heh, I try to learn something new every month (I get these rushes sometimes: in 2003 it was Earthships http://koti.mbnet.fi/earths1/ , in the summer it was fixing the car, at the end of July a Barbie house for my little girl, last month fixing car panels). Pictures of the 'projects' are in my blog: http://www.myspace.com/tare69 . - Tare |
Dotsch (Joined: 9 Jun 99, Posts: 2422, Credit: 919,393, RAC: 0) | Quote from Tare: "...does S@H see the other nodes as extra CPUs, so it can run 4 tasks at once...?" What you want to do would not improve the processing performance of the BOINC applications and would need a lot of code rewriting, so I think it will not be implemented. You would have the same performance without Beowulf if you install your nodes diskless, as I suggested in my last posting, and run four independent BOINC instances, one per node. Quote from Tare: "I have no other scientific application I want to test the cluster with..." You need a lot of C/C++, compiling and Unix knowledge to bring an HPC cluster to work. That is why I recommended that, if you want to learn about HPC clusters, you pick up some sample Beowulf test and demo applications and run them on your cluster. |
Tare (Joined: 17 May 99, Posts: 4, Credit: 169,036, RAC: 0) | Quote from Dotsch: "What you want to do would not improve the processing performance of the BOINC applications and would need a lot of code rewriting..." I don't want to make any changes to S@H (and I won't, so no C/C++ programming). I only want to build an easy cluster (easy for me, because nowadays there are many ISO setup packages to start from, with just some scripting and address setup to write, so no need to rebuild the programs). Of course I will test the cluster with the programs included in the ISO packages (like http://clusterknoppix.sw.be/ and so on), and that computing power is not great for BOINC or other power-hungry applications; I only want to see if I can do it :-) (the cluster, I mean; if SETI works on it, even better...) |
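Dotsch's suggestion in the thread above (diskless nodes booting from one PC, each running its own independent BOINC instance started at boot) can be sketched in a few lines of shell. This is a minimal sketch under assumptions not stated in the thread: the layout path and the hostnames node0..node3 are illustrative, and in practice the root directory would be an NFS export shared by the diskless nodes. The BOINC client uses its working directory as its data directory, so giving each node its own directory is what keeps the four instances independent.

```shell
#!/bin/sh
# Sketch only: paths and hostnames (boinc-data, node0..node3) are illustrative.
# Each node gets its own BOINC data directory; the client stores all of its
# state (attached projects, work units, checkpoints) in its working directory,
# so separate directories keep the instances fully independent.
BOINC_ROOT=./boinc-data        # in practice, an NFS export shared by the nodes

for node in node0 node1 node2 node3; do
    mkdir -p "$BOINC_ROOT/$node"
done

# On each diskless node, a boot script (e.g. /etc/rc.local) would then run
# something like:
#   cd $BOINC_ROOT/$(hostname) && /usr/bin/boinc --daemon
ls "$BOINC_ROOT"
```

Because every node changes into its own subdirectory before launching the client, the four instances never contend for the same state files, which is exactly why this setup performs the same as a Beowulf port would for a single-threaded application.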
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.