Message boards :
Number crunching :
Multiple Project Delegation
Cyntech (Joined: 17 Apr 02, Posts: 21, Credit: 1,259,030, RAC: 0)
Hi all, I have 3 machines working 3 projects, and I was wondering which is more efficient: having all three machines work all three projects, two projects on each, or one project per machine? Regards,
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Hi all, fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that its work units are incredibly LARGE crunches, each running in excess of a week. So while the SETI servers are taking a breather, CPDN will slip you a couple that will keep you full. Janice
Cyntech (Joined: 17 Apr 02, Posts: 21, Credit: 1,259,030, RAC: 0)
> Fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that its work units are incredibly LARGE crunches, each running in excess of a week. So while the SETI servers are taking a breather, CPDN will slip you a couple that will keep you full.

I'd noticed this. So, would it be better to dedicate one machine to CPDN and share the other two?
Sutaru Tsureku (Joined: 6 Apr 07, Posts: 7105, Credit: 147,663,825, RAC: 5)
> Fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that its work units are incredibly LARGE crunches, each running in excess of a week. So while the SETI servers are taking a breather, CPDN will slip you a couple that will keep you full.

I don't have much experience with running multiple projects simultaneously. I guess he meant that if you set all projects to 33.3/33.3/33.3 % on each host, CPDN could end up crunching ~100 % of the CPU. So one dedicated host for CPDN would give the other projects more of a chance.

BTW, you could double the performance/RAC of your PCs by installing optimized project applications for SETI@home/AstroPulse. For how to do it, have a look at my profile, where I have an easy 'quick instruction'.

BTW #2: I would crunch 100 % SETI@home with the Q6600 & 8800 GT. ;-) Because I'm a 100 % SETIan. X-D
tullio (Joined: 9 Apr 04, Posts: 8797, Credit: 2,930,782, RAC: 1)
CPDN has very long deadlines. Mine is December 2011, so it won't occupy your CPU by going into high priority. I am running 6 BOINC projects, including CPDN. Tullio
Miep (Joined: 23 Jul 99, Posts: 2412, Credit: 351,996, RAC: 0)
> I have 3 machines working 3 projects and I was just wondering what's more efficient - having all three work all three projects, or two on each, or one project per machine?

That depends on what you call 'efficient' ;) i.e. what kind of resource share/credit/balance you want to achieve.

I'd probably set the dual-core machine to CPDN, with MilkyWay on a small resource share as a backup (just in case), and share the two others. If MilkyWay takes a long time on the small one, I might consider setting that one to SETI only; if you have consistent runtimes of >24 h there, you shouldn't have a problem keeping enough cache for the weekly outage.

Running optimised applications will give you higher performance, but requires slightly 'advanced' computer skills. I also strongly recommend setting BOINC to 'no network activity' and making a backup of the BOINC Data directory before running the installer, and doing it while few tasks are incomplete, so you don't lose your cache if you make a mistake.

@tullio: yes, but depending on cache settings you can run into: I need some SETI work; oh, no SETI, fine, let's get some more CPDN instead and run up the debt. Now I'm stuffed, so I don't ask for work. Oh, I need more work, preferably SETI, as I've built up quite a debt by now. I can't get SETI, so let's get CPDN...

Carola
-------
I'm multilingual - I can misunderstand people in several languages!
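Carola's backup advice boils down to "snapshot the whole BOINC Data directory before you touch anything". A minimal sketch of that step in Python (the paths are placeholders, not a real install location; the actual directory varies by OS and BOINC version, and the client should be stopped before copying so files aren't changing mid-backup):

```python
# Sketch: snapshot the BOINC data directory to a dated backup folder
# before installing optimized apps. Paths are hypothetical examples.
import shutil
from datetime import date
from pathlib import Path

def backup_boinc_data(data_dir: str, backup_root: str) -> Path:
    """Copy the entire BOINC data directory into a dated backup folder."""
    dest = Path(backup_root) / f"boinc-backup-{date.today():%Y%m%d}"
    # copytree refuses to overwrite an existing destination, which is
    # exactly what you want for a safety backup
    shutil.copytree(Path(data_dir), dest)
    return dest

# Example call (adjust paths to your machine):
# backup_boinc_data("C:/ProgramData/BOINC", "D:/backups")
```

If the optimized-app install goes wrong, restoring is just copying the snapshot back over the data directory with the client stopped.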
tullio (Joined: 9 Apr 04, Posts: 8797, Credit: 2,930,782, RAC: 1)
@Carola I have a very small cache (0.25 days), and I get a new WU only when the preceding one has finished and is uploading. Since I have 6 projects, I am never out of work. Tullio
Bill Walker (Joined: 4 Sep 99, Posts: 3868, Credit: 2,697,267, RAC: 0)
One of my machines runs both S@H and CPDN, and it seems to honour the shares I have set, at least in the long run. My CPDN work units run for months at a time; S@H multibeam units take a few days each (it's an old, slow machine). CPDN ignores the published deadlines, and awards credit in small steps during the crunch.

What s^s MAY have seen is BOINC trying to even out the shares when you add a new project. If you add a second project and tell BOINC to split the two projects 50/50, for example, BOINC will run the new project much more than 50% until the long-term average of work done gets close to 50/50. This could take weeks, or months. You need to walk away, enjoy a beverage, and come back and look at things after a few weeks.

I have had problems with multiple projects during the SETI server downtimes, as s^s describes, but for me it is WCG that sees an empty computer and tries to fill it. It didn't help that a recent batch of their WUs came down the pipe with an estimated completion time of 3:45 and then took 9 to 12 hours each. That had both CPUs in panic mode for a few days doing WCG, and I missed topping up for the SETI outage two weeks ago.

Getting back to the original question, I think the best answer is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.), but at least you will be getting some RAC on each machine. With only one project per machine, expect times when a machine has nothing to do.
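The catch-up behaviour Bill describes can be illustrated with a toy model. This is not BOINC's actual scheduler (which uses debts and many other inputs); it's just a sketch of the principle: each day, run whichever project is furthest below its target share of the cumulative work done.

```python
# Toy model (NOT BOINC's real algorithm) of share catch-up: a newly
# attached project gets nearly all the CPU until long-term averages
# approach the configured split.
def simulate(days, shares, head_start):
    """shares: target fraction per project; head_start: prior work done."""
    done = list(head_start)
    for _ in range(days):
        total = sum(done) + 1  # the unit of work about to be assigned
        # deficit = how far each project is below its target share
        deficits = [shares[i] * total - done[i] for i in range(len(shares))]
        done[deficits.index(max(deficits))] += 1  # run the neediest project
    return done

# Project A has a long history; project B just attached with a 50/50 split.
after = simulate(days=100, shares=[0.5, 0.5], head_start=[300, 0])
# B receives all 100 days of work while it catches up: after == [300, 100]
```

This is why the split can look badly skewed for weeks after attaching a project, even though the long-run average is heading toward the configured shares.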
Cyntech (Joined: 17 Apr 02, Posts: 21, Credit: 1,259,030, RAC: 0)
> CPDN has very long deadlines. Mine is December 2011, so it won't occupy your CPU by going into high priority. I am running 6 BOINC projects, including CPDN.

How do you have your resource share configured?
tullio (Joined: 9 Apr 04, Posts: 8797, Credit: 2,930,782, RAC: 1)
> CPDN has very long deadlines. Mine is December 2011, so it won't occupy your CPU by going into high priority. I am running 6 BOINC projects, including CPDN.

All equal (100).
Cyntech (Joined: 17 Apr 02, Posts: 21, Credit: 1,259,030, RAC: 0)
> One of my machines runs both S@H and CPDN, and it seems to honour the shares I have set, at least in the long run. My CPDN work units run for months at a time; S@H multibeam units take a few days each (it's an old, slow machine). CPDN ignores the published deadlines, and awards credit in small steps during the crunch.

How would you suggest setting the resource shares? Set all to 100 and let BOINC sort it out evenly?
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
When I had CPDN set to 10% (SETI 100), CPDN took an absolute monopoly of the CPU. I doubt 1% would be any different. Janice
razamatraz (Joined: 23 Oct 07, Posts: 142, Credit: 27,815,748, RAC: 0)
I leave CPDN on mine in babysit mode (no new units) with a resource share of 15 versus SETI at 100, with Collatz at 1%. If SETI and Collatz run out of units, I let CPDN grab a couple. They usually take about 5-10 days, but it switches them up with the SETI units somewhat. The resource share settings are essentially non-applicable to GPUs as far as I know, but an 8800 GT can't run MilkyWay anyway. I would put the Q6600 with the 8800 GT at 100 SETI and maybe 10 Collatz, since Collatz can back up the 8800 GT; feel free to leave MW and CPDN at a small percentage. The other two machines are significantly slower; you could easily run them however you want, 33/33/33, etc.
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
CPDN does not use the GPU, so for the moment GPUs are safe from it. Janice
tullio (Joined: 9 Apr 04, Posts: 8797, Credit: 2,930,782, RAC: 1)
CPDN is like a "basso continuo" in a concerto by Bach. All the other projects' WUs come and go, but CPDN keeps rolling on like the Mississippi. Tullio
Bill Walker (Joined: 4 Sep 99, Posts: 3868, Credit: 2,697,267, RAC: 0)
I think that in the long run (over several months) the CPU time allotted will more or less be divided as per your resource allocation. RAC is different, however, since different projects give you different RAC per CPU hour. So the question is: are you interested in splitting up the science, or just going for RAC?

Splitting the science is a personal choice; whatever floats your boat, as we say here. For straight CPU RAC, run a high share of MilkyWay with optimized apps, and keep a low share of S@H, also optimized. And maybe a low share of WCG or something, since MW and S@H have frequent outages that have overlapped in the past; a third project will give you at least some credit when both of the biggies are down.

Also, if you are serious about RAC, consider running 24/7. Just keep an eye on temperatures. I have my laptop throttled to 80% CPU usage, running 24/7, which gives a good RAC without me babysitting it. And give all this some weeks to sort itself out before you decide it isn't what you want. This is science, not a video game ;).

Can't comment on GPUs; not in my budget. Others may have other ideas about the best RAC.

EDIT: looking at your RAC, I think you currently have a higher resource share for S@H than for MW. If you swap those shares, your RAC will eventually go up.
Aurora Borealis (Joined: 14 Jan 01, Posts: 3075, Credit: 5,631,463, RAC: 0)
The new CPDN WUs have deadlines set 5 or 6 months out. Set a low resource share and they immediately go into high-priority mode. CPDN has never shared well on single-core machines. Dual/quad cores with a reasonable ratio do OK even with multiple projects; with my list of projects, CPDN does need a slightly higher percentage on the dual core. Boinc V7.2.42 Win7 i5 3.33G 4GB, GTX470
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Bill, a quick comment about GPU budgets: the 9600 GTs are selling for about 50 USD. They do draw a bit more power, but one may run OK on an existing power supply. If that is too much or too bothersome, then you are right. I happen to always run a GPU whether crunching or not, and it is the first thing I look for when computer replacement time comes. Janice
Cyntech (Joined: 17 Apr 02, Posts: 21, Credit: 1,259,030, RAC: 0)
To be quite honest, I've probably leaned towards going for RAC, with a small helping of 'in it for the science'.

My resource shares were set to:

* CPDN - 20
* MW - 20
* SETI - 100

Machines:

* P4, 2 cores: CPDN 20 & SETI 100
* P4, 1 core: SETI 100 & MW 1
* Quad core: SETI 90 & MW 1

So, if I've understood what has been said so far, because of SETI's outages a better RAC is achieved by giving something other than SETI the majority of the resource share?

I've now reset the resource share of all three projects to 100 (although because of SETI's outage, it's not refreshing on my quad core) and will use BAM! to control the resource shares on each machine:

* P4, 2 cores: CPDN 90 & MW 10
* P4, 1 core: MW 67 & SETI 33
* Quad core: MW 67 & SETI 33

Have I understood correctly? How does this look?
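Since BOINC resource shares are relative rather than percentages, each project's slice of a machine is its share divided by the machine's total. A small helper makes the arithmetic in the lists above explicit (project names and numbers are just the ones from this thread):

```python
# Convert raw BOINC resource shares into fractions of CPU time.
# Shares are relative: a project's slice is its share over the total.
def slices(shares):
    """Map project name -> fraction of CPU time on that machine."""
    total = sum(shares.values())
    return {name: share / total for name, share in shares.items()}

p4_dual = slices({"CPDN": 90, "MW": 10})   # CPDN gets 0.90, MW gets 0.10
quad    = slices({"MW": 67, "SETI": 33})   # MW gets 0.67, SETI gets 0.33
```

Note this is the long-run target only; as discussed above, the actual short-term split can deviate a long way while BOINC works off accumulated debt or a project runs out of work.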
John McLeod VII (Joined: 15 Jul 99, Posts: 24806, Credit: 790,712, RAC: 0)
You might want to set "Extra work" to 4 days to work around the SETI outages. With just 2/14 of the computer, CPDN may not have enough resource share to avoid high-priority mode. RAC should be determined by the efficiency of the computer on that particular type of task; it really depends on the computer. BOINC WIKI
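John's "2/14 of the computer" follows from the original global shares quoted earlier in the thread (CPDN 20, MW 20, SETI 100); the one-liner below just spells out that arithmetic:

```python
# CPDN's slice of a machine running all three projects at the original
# global shares (CPDN 20, MW 20, SETI 100) is its share over the total.
cpdn_fraction = 20 / (20 + 20 + 100)   # = 1/7, i.e. 2/14 of the CPU time
```

At roughly 14% of the CPU, a multi-month CPDN task can fall behind its deadline pace, which is what pushes BOINC into high-priority mode.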
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.