Multiple Project Delegation

Message boards : Number crunching : Multiple Project Delegation
Profile Cyntech
Joined: 17 Apr 02
Posts: 21
Credit: 1,259,030
RAC: 0
Australia
Message 1026027 - Posted: 18 Aug 2010, 6:51:47 UTC

Hi all,

I have 3 machines working 3 projects and I was just wondering what's more efficient - having all three work all three projects, or two on each, or one project per machine?

Regards,

Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1026028 - Posted: 18 Aug 2010, 6:55:31 UTC - in response to Message 1026027.  

Hi all,

I have 3 machines working 3 projects and I was just wondering what's more efficient - having all three work all three projects, or two on each, or one project per machine?

Regards,


Fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that CPDN work units are incredibly LARGE crunches, running in excess of a week EACH. So while the SETI servers are taking a breath, CPDN will slip you a couple that will keep you full.
Janice
Profile Cyntech
Joined: 17 Apr 02
Posts: 21
Credit: 1,259,030
RAC: 0
Australia
Message 1026030 - Posted: 18 Aug 2010, 7:00:22 UTC - in response to Message 1026028.  

Fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that CPDN work units are incredibly LARGE crunches, running in excess of a week EACH. So while the SETI servers are taking a breath, CPDN will slip you a couple that will keep you full.


I'd noticed this. So, would it be better if I run CPDN on one machine dedicated and share the other two?
Profile Sutaru Tsureku
Volunteer tester
Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1026066 - Posted: 18 Aug 2010, 11:07:03 UTC - in response to Message 1026030.  

Fair warning: if you run CPDN on the same computer as SETI, it WILL virtually take over the CPU. The reason is that CPDN work units are incredibly LARGE crunches, running in excess of a week EACH. So while the SETI servers are taking a breath, CPDN will slip you a couple that will keep you full.


I'd noticed this. So, would it be better if I run CPDN on one machine dedicated and share the other two?


I don't have much experience with running several projects simultaneously.
I guess he meant that if you set all projects to 33.3/33.3/33.3 % on each host, CPDN could end up crunching ~100 % of the time.
So dedicating one host to CPDN would give the other projects more of a chance.

BTW, you could double the performance/RAC of your PCs if you installed optimized project applications for SETI@home/AstroPulse.
For how to do it, have a look at my profile, where I have an easy 'quick instruction'.

BTW#2: I would crunch 100 % SETI@home with the Q6600 & 8800 GT. ;-)
Because I'm a 100 % SETIan. X-D

Profile tullio
Volunteer tester
Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1026070 - Posted: 18 Aug 2010, 11:31:53 UTC

CPDN has very long deadlines. Mine is December 2011. So it won't occupy your CPU by going high priority. I am running 6 BOINC projects, including CPDN.
Tullio
Profile Miep
Volunteer moderator
Joined: 23 Jul 99
Posts: 2412
Credit: 351,996
RAC: 0
Message 1026079 - Posted: 18 Aug 2010, 11:58:36 UTC - in response to Message 1026027.  

I have 3 machines working 3 projects and I was just wondering what's more efficient - having all three work all three projects, or two on each, or one project per machine?


That depends on what you call 'efficient' ;) i.e. what kind of resource share/credit/balance you want to achieve.

I'd probably set the dual-core machine for CPDN, with MilkyWay on a small resource share as a backup (just in case), and share the two others.

If MilkyWay takes a long time on the small one I might consider setting that to SETI only - if you have consistent runtimes of >24 h there you shouldn't have a problem keeping enough cache for the weekly outage...

Running optimised applications will give you higher performance, but requires slightly 'advanced' computer skills. Also, I strongly recommend setting BOINC to 'no network' and making a backup of the BOINC Data directory before running the installer, and doing it when few tasks are pending, so you don't lose your cache if you make a mistake...

@tullio yes, but depending on cache settings you can run into: I need some SETI work - oh, no SETI; fine, let's get some more CPDN instead and up the debt. Now I'm stuffed, so I don't ask for work. Oh, I need more work, preferably SETI, as I've built up quite a debt by now. I can't get SETI - let's get CPDN...
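That fetch trap can be sketched as a toy loop. This is not BOINC's actual work-fetch logic, just an illustration of how a full cache keeps the preferred project from ever being asked:

```python
# Toy model of the cache trap described above: during a SETI outage the
# client tops its cache up from the backup project, so when SETI comes
# back the cache is already full and SETI never gets asked for work.
# (Illustrative only - not BOINC's real work-fetch algorithm.)
def fetch_cycle(cache_days, seti_up, cache_target=2.0):
    """Return which project gets asked for work this cycle, if any."""
    if cache_days >= cache_target:
        return None        # cache is stuffed: no request at all
    if seti_up:
        return "SETI"      # preferred project (largest debt)
    return "CPDN"          # backup project fills the gap

log = []
cache = 0.5
for seti_up in [False, False, True, True]:  # outage, then SETI back up
    project = fetch_cycle(cache, seti_up)
    log.append(project)
    if project:
        cache = 2.0        # whichever project answered filled the cache
print(log)  # ['CPDN', None, None, None] - SETI is never asked
```

In reality the cache drains between requests, but with week-long CPDN tasks in the queue the same pattern can repeat across several outages.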


Carola
-------
I'm multilingual - I can misunderstand people in several languages!
Profile tullio
Volunteer tester
Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1026081 - Posted: 18 Aug 2010, 12:03:13 UTC

@Carola
I have a very small cache (0.25 days) and I get a new WU only when the preceding one has finished and is uploading. Since I have 6 projects I am never out of work.
Tullio
Profile Bill Walker
Joined: 4 Sep 99
Posts: 3868
Credit: 2,697,267
RAC: 0
Canada
Message 1026082 - Posted: 18 Aug 2010, 12:06:10 UTC

One of my machines runs both S@H and CPDN, and it seems to honour the shares I have set - at least in the long run. My CPDN work units run for months at a time, S@H multibeam units take a few days each. (It's an old slow machine.) CPDN ignores the published deadlines, and awards credits in small steps during the crunch.

What s^s MAY have seen is BOINC trying to even out the share when you add a new project. If you add a second project and tell BOINC to split the two projects 50/50, for example, BOINC will run the new project much more than 50% until the long term average of work done gets close to 50/50. This could take weeks, or months. You need to walk away, enjoy a beverage, and come back and look at things after a few weeks.
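That evening-out behaviour can be sketched with a toy scheduler (the numbers are assumptions for illustration, not BOINC's real long-term-debt algorithm): project A has a long head start, and the new project B is over-scheduled until the cumulative split approaches 50/50.

```python
# Toy model of BOINC evening out a 50/50 resource share after a second
# project is attached. Project A has a big head start of credited CPU
# time; the new project B gets the whole CPU until the cumulative totals
# approach the share. (Assumed numbers - not the actual debt algorithm.)
def simulate(days, share_b=0.5, head_start=1000.0, day_hours=24.0):
    done_a, done_b = head_start, 0.0   # CPU-hours credited so far
    for _ in range(days):
        if done_b / (done_a + done_b) < share_b:
            done_b += day_hours        # B is behind: it gets the whole day
        else:
            done_a += day_hours / 2    # balanced: split per the 50/50 share
            done_b += day_hours / 2
    return done_a, done_b

a, b = simulate(days=60)
print(f"after 60 days: A={a:.0f}h, B={b:.0f}h ({b/(a+b):.0%} to B)")
# after 60 days: A=1216h, B=1224h (50% to B)
```

With a 1,000-hour head start it takes B about six weeks of monopolising the CPU before the split settles near 50/50 - hence the advice to walk away and look again after a few weeks.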

I have had problems with multiple projects during the SETI server downtimes, as s^s describes, but for me it is WCG that sees an empty computer and tries to fill it. It didn't help that a recent batch of their WUs came down the pipe with an estimated completion time of 3:45, and then took 9 to 12 hours each. Had both CPUs in panic mode for a few days doing WCG, and missed topping up for the SETI outage two weeks ago.

Getting back to the original question, I think the best answer to this problem is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.) but at least you will be getting some RAC on each machine. If you had only one project per machine, expect times when a machine may have nothing to do.

Profile Cyntech
Joined: 17 Apr 02
Posts: 21
Credit: 1,259,030
RAC: 0
Australia
Message 1026097 - Posted: 18 Aug 2010, 12:57:01 UTC - in response to Message 1026070.  

CPDN has very long deadlines. Mine is December 2011. So it won't occupy your CPU by going high priority. I am running 6 BOINC projects, including CPDN.
Tullio


How do you have your resource share configured?
Profile tullio
Volunteer tester
Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1026101 - Posted: 18 Aug 2010, 13:13:43 UTC - in response to Message 1026097.  

CPDN has very long deadlines. Mine is December 2011. So it won't occupy your CPU by going high priority. I am running 6 BOINC projects, including CPDN.
Tullio


How do you have your resource share configured?

All equal (100).
Profile Cyntech
Joined: 17 Apr 02
Posts: 21
Credit: 1,259,030
RAC: 0
Australia
Message 1026131 - Posted: 18 Aug 2010, 15:20:14 UTC - in response to Message 1026082.  

One of my machines runs both S@H and CPDN, and it seems to honour the shares I have set - at least in the long run. My CPDN work units run for months at a time, S@H multibeam units take a few days each. (It's an old slow machine.) CPDN ignores the published deadlines, and awards credits in small steps during the crunch.

...

Getting back to the original question, I think the best answer to this problem is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.) but at least you will be getting some RAC on each machine. If you had only one project per machine, expect times when a machine may have nothing to do.


How would you suggest setting the Resource share? Setting all to 100 and let it sort it out evenly?
Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1026141 - Posted: 18 Aug 2010, 15:45:26 UTC

When I had CPDN set to 10% (SETI 100), CPDN took an absolute monopoly of the CPU. I doubt 1% would be any different.
Janice
Profile razamatraz
Joined: 23 Oct 07
Posts: 142
Credit: 27,815,748
RAC: 0
Canada
Message 1026154 - Posted: 18 Aug 2010, 16:34:57 UTC

I leave CPDN on mine in babysit mode (no new units) with a resource share of 15 versus SETI's 100, but with Collatz at 1%. If SETI and Collatz run out of units I let CPDN grab a couple. They usually take about 5-10 days, but it switches them up with the SETI units somewhat.

The resource share stuff is kind of non-applicable to GPUs as far as I know, but an 8800 GT can't run MilkyWay anyway. I would set the Q6600 with the 8800 GT to 100 SETI and maybe 10 Collatz, since Collatz can back up the 8800 GT; feel free to leave MW and CPDN at a small percentage. The other two machines are significantly slower; you could easily run them however you want, 33/33/33, etc.
Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1026156 - Posted: 18 Aug 2010, 16:45:06 UTC - in response to Message 1026154.  

CPDN does not use GPU. So for the moment, those are safe from it.
Janice
Profile tullio
Volunteer tester
Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1026160 - Posted: 18 Aug 2010, 16:57:12 UTC

CPDN is like a "basso continuo" in a concert by Bach. All other projects' WUs come and go, but CPDN keeps rolling on like the Mississippi.
Tullio
Profile Bill Walker
Joined: 4 Sep 99
Posts: 3868
Credit: 2,697,267
RAC: 0
Canada
Message 1026162 - Posted: 18 Aug 2010, 17:00:10 UTC - in response to Message 1026131.  
Last modified: 18 Aug 2010, 17:04:25 UTC


Getting back to the original question, I think the best answer to this problem is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.) but at least you will be getting some RAC on each machine. If you had only one project per machine, expect times when a machine may have nothing to do.


How would you suggest setting the Resource share? Setting all to 100 and let it sort it out evenly?


I think that in the long run (over several months) the CPU time allotted will more or less be divided as per your resource allocation. RAC is different however, since different projects will give you different RAC per CPU hour.

So, the question is, are you interested in splitting up the science, or just going for RAC? Splitting science is a personal choice - whatever floats your boat, as we say here. For straight CPU RAC, run a high share of MilkyWay with optimized apps, and keep a low share of S@H, also optimized. And maybe a low share of WCG or something, since MW and S@H have frequent outages that have overlapped in the past. A third project will give you at least some credit when both the biggies are down.

Also, if you are serious about RAC, consider running 24/7. Just keep an eye on temps. I have my laptop throttled to 80 % CPU usage, running 24/7, which gives a good RAC without me babysitting it. And give all this some weeks to sort out before you decide it isn't what you want. This is science, not a video game ;).

Can't comment on GPUs, not in my budget. Others may have other ideas about best RAC.

EDIT: looking at your RAC, I think you currently have a higher resource share for S@H than for MW. If you swap these shares, your RAC will eventually go up.

Aurora Borealis
Volunteer tester
Joined: 14 Jan 01
Posts: 3075
Credit: 5,631,463
RAC: 0
Canada
Message 1026255 - Posted: 18 Aug 2010, 23:27:35 UTC
Last modified: 18 Aug 2010, 23:35:14 UTC

The new CPDN WUs have deadlines set to 5 or 6 months. Set a low resource share and they immediately go into high-priority mode.

CPDN has never shared well on single-core machines. Dual/quad cores with a reasonable ratio do OK even with multiple projects. With my list of projects, it does need a slightly higher percentage on the dual core.

Boinc V7.2.42
Win7 i5 3.33G 4GB, GTX470
Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1026257 - Posted: 18 Aug 2010, 23:34:55 UTC - in response to Message 1026162.  

Bill, a quick comment about GPU budgets...

The 9600 GTs are selling for about 50 USD. They do draw a bit more power, but one may run OK on an existing power supply. If that is too much or too bothersome, then you are right. I happen to always run a GPU whether crunching or not, and it is the first thing I look for when computer replacement time comes.
Janice
Profile Cyntech
Joined: 17 Apr 02
Posts: 21
Credit: 1,259,030
RAC: 0
Australia
Message 1026260 - Posted: 18 Aug 2010, 23:45:31 UTC - in response to Message 1026162.  


Getting back to the original question, I think the best answer to this problem is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.) but at least you will be getting some RAC on each machine. If you had only one project per machine, expect times when a machine may have nothing to do.


How would you suggest setting the Resource share? Setting all to 100 and let it sort it out evenly?


I think that in the long run (over several months) the CPU time allotted will more or less be divided as per your resource allocation. RAC is different however, since different projects will give you different RAC per CPU hour.

So, the question is, are you interested in splitting up the science, or just going for RAC? Splitting science is a personal choice - whatever floats your boat, as we say here. For straight CPU RAC, run a high share of MilkyWay with optimized apps, and keep a low share of S@H, also optimized. And maybe a low share of WCG or something, since MW and S@H have frequent outages that have overlapped in the past. A third project will give you at least some credit when both the biggies are down.

Also, if you are serious about RAC, consider running 24/7. Just keep an eye on temps. I have my laptop throttled to 80 % CPU usage, running 24/7, which gives a good RAC without me babysitting it. And give all this some weeks to sort out before you decide it isn't what you want. This is science, not a video game ;).

Can't comment on GPUs, not in my budget. Others may have other ideas about best RAC.

EDIT: looking at your RAC, I think you currently have a higher resource share for S@H than for MW. If you swap these shares, your RAC will eventually go up.



To be quite honest, I've probably leaned towards going for RAC with a small helping of 'in it for the science'.

My Resource share was set to:

CPDN - 20
MW - 20
SETI - 100

Machines:

P4 2 cores: CPDN 20 & SETI 100
P4 1 core: SETI 100 & MW 1
Quad Core: SETI 90 & MW 1

So, if I've understood what has been said so far, because of the outages SETI has, a better RAC is achieved by giving something other than SETI the majority of the resource share?

So, I've reset the resource share of the three projects to 100 (although because of SETI's outage, it's not refreshing on my Quad core) and will use BAM! to control the Resource share on each machine.

And:
P4 2 cores: CPDN 90 & MW 10
P4 1 core: MW 67 & SETI 33
Quad core: MW 67 & SETI 33

Have I understood correctly? How does this look?
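For what it's worth, shares are relative per host, so the long-run CPU split those numbers imply can be checked with a few lines of plain arithmetic (using the shares listed above):

```python
# Resource shares are relative per host: a project's long-run CPU
# fraction is its share divided by the sum of shares on that machine.
def splits(shares):
    total = sum(shares.values())
    return {proj: round(100 * s / total) for proj, s in shares.items()}

print(splits({"CPDN": 20, "SETI": 100}))  # old dual-core P4: {'CPDN': 17, 'SETI': 83}
print(splits({"CPDN": 90, "MW": 10}))     # proposed: {'CPDN': 90, 'MW': 10}
print(splits({"MW": 67, "SETI": 33}))     # proposed: {'MW': 67, 'SETI': 33}
```

So the old 20-vs-100 setting already gave CPDN about a sixth of the dual-core machine, and the new per-machine shares are effectively percentages because they sum to 100.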
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1026262 - Posted: 18 Aug 2010, 23:50:33 UTC - in response to Message 1026260.  


Getting back to the original question, I think the best answer to this problem is to have at least two projects on each machine. You may miss getting work for one project from time to time (S@H outages, MW running out of work, etc.) but at least you will be getting some RAC on each machine. If you had only one project per machine, expect times when a machine may have nothing to do.


How would you suggest setting the Resource share? Setting all to 100 and let it sort it out evenly?


I think that in the long run (over several months) the CPU time allotted will more or less be divided as per your resource allocation. RAC is different however, since different projects will give you different RAC per CPU hour.

So, the question is, are you interested in splitting up the science, or just going for RAC? Splitting science is a personal choice - whatever floats your boat, as we say here. For straight CPU RAC, run a high share of MilkyWay with optimized apps, and keep a low share of S@H, also optimized. And maybe a low share of WCG or something, since MW and S@H have frequent outages that have overlapped in the past. A third project will give you at least some credit when both the biggies are down.

Also, if you are serious about RAC, consider running 24/7. Just keep an eye on temps. I have my laptop throttled to 80 % CPU usage, running 24/7, which gives a good RAC without me babysitting it. And give all this some weeks to sort out before you decide it isn't what you want. This is science, not a video game ;).

Can't comment on GPUs, not in my budget. Others may have other ideas about best RAC.

EDIT: looking at your RAC, I think you currently have a higher resource share for S@H than for MW. If you swap these shares, your RAC will eventually go up.



To be quite honest, I've probably leaned towards going for RAC with a small helping of 'in it for the science'.

My Resource share was set to:

CPDN - 20
MW - 20
SETI - 100

Machines:

P4 2 cores: CPDN 20 & SETI 100
P4 1 core: SETI 100 & MW 1
Quad Core: SETI 90 & MW 1

So, if I've understood what has been said so far, because of the outages SETI has, a better RAC is achieved by giving something other than SETI the majority of the resource share?

So, I've reset the resource share of the three projects to 100 (although because of SETI's outage, it's not refreshing on my Quad core) and will use BAM! to control the Resource share on each machine.

And:
P4 2 cores: CPDN 90 & MW 10
P4 1 core: MW 67 & SETI 33
Quad core: MW 67 & SETI 33

Have I understood correctly? How does this look?

You might want to set "Extra work" to 4 days to work around the SETI outages. With just 2/14 of the computer, CPDN may not have enough resource share to avoid high-priority mode.

RAC should be determined by the efficiency of the computer on that particular type of task. It really depends on the computer.
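The "Extra work" suggestion comes down to simple arithmetic: the cache (in days) has to outlast the outage. A rough sketch, where the 48-hour outage length is an assumed example rather than a figure from this thread:

```python
# Rough check of whether an "Extra work" cache bridges a SETI outage.
# The 48-hour outage length is an assumed example, not an official figure.
def cache_covers_outage(cache_days, outage_hours):
    return cache_days * 24 >= outage_hours

print(cache_covers_outage(0.25, 48))  # tiny 0.25-day cache: False
print(cache_covers_outage(4, 48))     # 4-day cache: True
```

A small cache still works if, like tullio, you have several other projects attached to fill the gap; the 4-day cache matters most when one project dominates your shares.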


BOINC WIKI
©2024 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.