again less credits?

1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 902620 - Posted: 1 Jun 2009, 22:14:28 UTC - in response to Message 902613.  


The one thing I know for sure is that my G3 iMac, which doesn't do any 'tricky' operations (floating point or otherwise, mostly because it can't), has gotten paid progressively less for doing the exact same work it always has for the last year and a half. Same thing is true for my MMX only hosts. :-(

How is that keeping within the definition of the Cobblestone?

It has been stated elsewhere that even SETI has historically been overpaying, and now that they're trying to actually tie credit to the Cobblestone, that probably tends to lower credit.

I think Richard may have something when he talks about the effects of CUDA.

I know that there are all kinds of issues with counting flops.

I don't know, and I can't get that excited about it.

My question is: do we bow to those demanding more credit, and turn the Cobblestone into "fiat money" or do we try to push toward a stable currency based on a measurable commodity?

There are so many things that work against that, including the fact that the median machine isn't running the same architecture as the mythical "100 cobblestone computer."


ID: 902620
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 902804 - Posted: 2 Jun 2009, 5:28:48 UTC - in response to Message 902532.  

Richard Haselgrove wrote:

Thanks for the script link. I've had a read of it - haven't teased out all the details, but the basic SQL stuff is standard enough.

It does look as if the fast CUDA hosts get 80, or 100, or 500 'lottery tickets' for entry into the long list of recent results, and hence stand a far better chance of being included in the reduced list.

Maybe, maybe not. I think the enumeration which gathers the 10000 results proceeds in host ID number order, and given that there are on the order of 950 thousand results waiting for purging, I suspect no host with an ID above 2000000 is likely to be considered at all. While some users manage to upgrade their systems without getting a new ID, I think the low-numbered hosts probably don't include many with CUDA or the latest and greatest CPUs. That's bad statistics, but it's why I think the erosion we're seeing is mainly just older hosts retiring and thereby shifting the median up.

My 200 MHz Pentium MMX benches around 165 and 339, my 1.4 GHz Pentium-M around 1230 and 2520, so the benchmark sums have a ratio of around 7.44. For actual crunching the ratio is more like 29, and hosts with benchmarks twice my Pentium-M's are considerably more than twice as productive. It's a major flaw in the BOINC benchmark system.
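
To put rough numbers on that benchmark flaw, here is the arithmetic from the figures above (nothing from the BOINC code, just the quoted ratios):

    # Benchmark sums and actual crunching ratio quoted above.
    benchmark_ratio = (1230 + 2520) / (165 + 339)    # about 7.44
    throughput_ratio = 29.0                          # real speed difference when crunching

    # Under the old benchmark * time scheme, a claim scales with benchmark * run time,
    # so for the same workunit the faster host's claim relative to the slower host's is:
    relative_claim = benchmark_ratio / throughput_ratio
    print(round(relative_claim, 2))    # about 0.26, i.e. a quarter of the slow host's claim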

Every time a host is double-entered in the long list, it reduces the diversity and representativeness of the short list.

And averaging CPU and CUDA tasks by "sum(granted_credit)/sum(cpu_time)" is just plain wrong.

Yes, but if a host is above the median it matters not how far it's above. Agreed, it's bad statistics, but for the purpose of this credit adjustment I don't think it hurts.

Do you know if anything representing plan_class is stored in the result table? I think it would be fair for the results my CUDA hosts return from the 603 CPU app to be included in the 10000 lottery, but results from the 608 CUDA app should be excluded from the average. That should still be OK for the VLARs which are sent out as 608 CUDA but I return as 603 CPU (not so sure about Raismer's Perl script, which can do the reverse transition).

The plan class is not in the result database, though the app_version_num is (the one the Scheduler told the host to use).

The only other alternative would be to exclude CUDA in the hosts table, on the grounds that there should still be enough MB work to get a reasonable average from 10000 results from non-CUDA hosts.

But then we'd be excluding CUDA hosts from the AP calculation too - needlessly, and the smaller number of results means that there probably would be distortion.

Hmmmm. I don't think Eric has thought all this through - and now that I've tried, I can see why not!

As long as CUDA hosts are a minority, their results will be above the median using the present system. If the BOINC devs get around to redoing everything in elapsed time terms, that might move some 8400GS or similar CUDA hosts down to the vicinity of the median, and I'm not sure that would be a good thing.
                                                                Joe
ID: 902804
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 902805 - Posted: 2 Jun 2009, 5:36:51 UTC - in response to Message 902619.  
Last modified: 2 Jun 2009, 5:37:39 UTC

A wee bit of a history lesson might be in order at this point.

Once upon a time, some guys at Berkeley came up with this BOINC thingy, and they thought it'd be good if they could bring in all the other projects and issue credit for work done, and have the credit be comparable between projects.

So if you got 50 credits on SETI and 50 credits on CPDN, it was because you had done equal work.

And Jeff said "how about defining a credit as 1/100th of the work a specific machine can do in a day?" and they named it the Cobblestone.

... and that is when all the problems started.

Initially, BOINC granted credit based on Benchmarks and Time because the Cobblestone is defined in terms of Benchmarks and Time.

I'd even suggest that this original scheme was the most accurate, since it did come right from the definition -- if you averaged it across a bunch of work.

It just wasn't very repeatable, and all we talked about back then was how one host could claim "20" when the next cruncher claimed "50"; sometimes you got paid too well, and other times got cheated -- but it averaged out.

Now, we count FLOPs, which have no connection to the Cobblestone definition at all, and a scaling factor (2.85) is applied on Multibeam to try to bring the two into line.

Eric's script is trying to look at work, find a median, compare Benchmark * Time vs. FLOPs and slowly refine the scaling so it tracks, on average, back to the original "Gold Standard" cobblestone.
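
For what it's worth, a guess at the general shape of such an adjustment. This is only a sketch of the mechanism described above, not Eric's actual script; the sample of 10,000 results and the pairing of the two claims are assumptions taken from this thread:

    # Hypothetical sketch of a median-based scaling adjustment (not the real script).
    def refine_multiplier(sample, current_multiplier):
        # sample: ~10,000 recent results as (benchmark_time_claim, flop_claim) pairs
        ratios = sorted(bt / fl for bt, fl in sample if fl > 0)
        target = ratios[len(ratios) // 2]    # multiplier that would match the median result
        # Move only part of the way there, so granted credit drifts slowly
        # back toward the benchmark-and-time (Cobblestone) standard.
        return current_multiplier + 0.1 * (target - current_multiplier)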

The big problem is that we (in the U.S. and probably most countries) don't really remember when most currencies were "hard currencies" and a dollar literally represented a specific amount of gold in a vault somewhere.



Say once upon a time I had a computer that met the "original Gold Standard". It used to get x number of cobblestones a day. Now that they are finding the median, and my old computer is way behind the curve, it will get less. One way to get rid of old computers, I guess. :)
The computer is doing the same amount of work today that it did before the new mean went into effect. Computers will have to be replaced to keep getting the same amount of credit. The more that are replaced, the more that will need to be replaced just to stand still on the credit uphill road.

Nope. All computers can be referenced to the gold standard; no computer is required to meet it.

Take the old credit system. Any computer under the old system would get credit based on the benchmark times the number of seconds.

So, let's say your "reference" machine did a 20 credit work-unit in about 1700 seconds. Using the FLOP count method, it wanted 22 credits.

You retire it and get a machine that goes ten times faster.

It does a 20 credit work unit in about 170 seconds (it's ten times faster, right?), and the FLOP count is the SAME, so it wants 22 credits.

With either machine, we can calculate what would have been requested using Benchmark * Time, and what was requested using FLOPS, and using the (made up) numbers in the example, the multiplier would be 0.9.
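
Spelled out with those made-up numbers (just the arithmetic of the example, nothing official):

    benchmark_time_claim = 20.0   # what benchmark * time would have requested
    flop_claim = 22.0             # what counting FLOPs requested
    multiplier = benchmark_time_claim / flop_claim
    print(round(multiplier, 2))   # 0.91, i.e. roughly the 0.9 quoted above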

That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system.
ID: 902805
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 902844 - Posted: 2 Jun 2009, 9:18:26 UTC - in response to Message 902804.  

As long as CUDA hosts are a minority, their results will be above the median using the present system. If the BOINC devs get around to redoing everything in elapsed time terms, that might move some 8400GS or similar CUDA hosts down to the vicinity of the median, and I'm not sure that would be a good thing.
                                                                Joe

Recording elapsed time, and using it in this calculation, would be a help. But if they're going to do that, they also need to record and use some sort of benchmark figure for the GPU (even if just the estimate BOINC gets from the card specification at startup) - and that for the particular GPU used for the task, given that a single host can have multiple CUDA cards of different speeds.
ID: 902844
Larry256
Volunteer tester

Joined: 11 Nov 05
Posts: 25
Credit: 5,715,079
RAC: 8
United States
Message 903057 - Posted: 3 Jun 2009, 1:36:50 UTC - in response to Message 902805.  



That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system.



We'll never have a Fair credit system.
ID: 903057
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903100 - Posted: 3 Jun 2009, 4:06:59 UTC - in response to Message 903057.  



That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system.



We'll never have a Fair credit system.

Depends on what you mean by fair.

If fair means equitable then any credit system that grants credit without discriminating or favoring any one group is "fair."

... and we had that. Trouble is, it wasn't highly repeatable -- the numbers varied a lot -- but it averaged out.

What we have now is more influenced by CPU architecture than the old "benchmark * time" system -- some processors are favored over others.
ID: 903100
Profile Conan
Volunteer tester

Joined: 30 Aug 05
Posts: 15
Credit: 1,585,618
RAC: 0
Australia
Message 903249 - Posted: 3 Jun 2009, 14:41:26 UTC

From what I am seeing, my AMD X2 4800+ machine used to get around 20 to 22 credits an hour running SETI, and I got this for a long while.
I had been noticing that the amount per WU was dropping, but I did not worry too much until I did a calculation or two.

Now I am only getting 10 to 12 credits an hour.

This is most assuredly a large drop.

Whether it is caused by CUDA's influence or by SETI in general, I have already reduced my meager input even further. Running the standard application on my limited computer resources, I need all the credit I can get, and now I am not getting it.

It appears that without an optimised application or a CUDA GPU card, it no longer pays to run SETI.

Even Rosetta, which has always paid less than SETI, now pays more.
ID: 903249
Larry256
Volunteer tester

Joined: 11 Nov 05
Posts: 25
Credit: 5,715,079
RAC: 8
United States
Message 903288 - Posted: 3 Jun 2009, 18:20:09 UTC - in response to Message 903100.  



That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system.



We'll never have a Fair credit system.

Depends on what you mean by fair.

If fair means equitable then any credit system that grants credit without discriminating or favoring any one group is "fair."

... and we had that. Trouble is, it wasn't highly repeatable -- the numbers varied a lot -- but it averaged out.

What we have now is more influenced by CPU architecture than the old "benchmark * time" system -- some processors are favored over others.



So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?
ID: 903288
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 903292 - Posted: 3 Jun 2009, 18:38:20 UTC - in response to Message 903288.  

So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

Because as much as people try to, fairness can't be strictly defined. It's very much subjective.
Grant
Darwin NT
ID: 903292
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903301 - Posted: 3 Jun 2009, 19:10:52 UTC - in response to Message 903288.  



That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system.



We'll never have a Fair credit system.

Depends on what you mean by fair.

If fair means equitable then any credit system that grants credit without discriminating or favoring any one group is "fair."

... and we had that. Trouble is, it wasn't highly repeatable -- the numbers varied a lot -- but it averaged out.

What we have now is more influenced by CPU architecture than the old "benchmark * time" system -- some processors are favored over others.



So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

No, I'm saying the opposite.

Everyone can, if they wish, run the optimal hardware for SETI, hardware that will produce the best credit per hour possible.

If you want to get the absolute maximum credit, there is enough information in the forums to do that.

... but, if you do a given workunit, and I do the same workunit, and we both return valid results, we will get exactly the same amount of credit.

Equal pay for equal work -- what could be more fair than that?

The odds of getting a good-paying work unit or a poor-paying work unit are the same for everyone. So the distribution of work is fair.

Everyone has the opportunity to favor one kind of work or another. There is no discrimination in multibeam vs. astropulse, so that's fair.

Your argument seems to be that credit should be more consistent and while I agree that's a good goal, it's not possible without a lot more accounting in the science application, and I'd rather not see the time wasted on accounting.

But it is obviously completely fair.
ID: 903301
Larry256
Volunteer tester

Joined: 11 Nov 05
Posts: 25
Credit: 5,715,079
RAC: 8
United States
Message 903302 - Posted: 3 Jun 2009, 19:29:19 UTC - in response to Message 903292.  

So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

Because as much as people try to, fairness can't be strictly defined. It's very much subjective.


OK, let me reword it...
Can we have a credit system that is less influenced by CPU architecture than the old "benchmark * time" system? Maybe one that won't devalue the credits over time for the amount of work done.
If I have a computer that is doing a job today and gets paid 5 whatevers, then next year, for doing the same work, it should still get paid 5 whatevers, not 4.2 because a new computer did the same job in less time. The new computer should get paid 5.8 whatevers.
Some people want it the first way, and some want it the second way.

Whichever way it is, they need to be up front about it. Not saying "Well, we've been overpaying and need to fix it" every 6-9 months, then spreading the word to most every other project and saying they are now overpaying. It's getting old.

I need another drink. :)

lol
ID: 903302
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903311 - Posted: 3 Jun 2009, 20:08:32 UTC - in response to Message 903302.  

So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

Because as much as people try to, fairness can't be strictly defined. It's very much subjective.


OK, let me reword it...
Can we have a credit system that is less influenced by CPU architecture than the old "benchmark * time" system? Maybe one that won't devalue the credits over time for the amount of work done.
If I have a computer that is doing a job today and gets paid 5 whatevers, then next year, for doing the same work, it should still get paid 5 whatevers, not 4.2 because a new computer did the same job in less time. The new computer should get paid 5.8 whatevers.
Some people want it the first way, and some want it the second way.

Whichever way it is, they need to be up front about it. Not saying "Well, we've been overpaying and need to fix it" every 6-9 months, then spreading the word to most every other project and saying they are now overpaying. It's getting old.

I need another drink. :)

lol

If it was an easy problem, there would be an easy solution.

The easiest system would be to just count workunits, but the range of work to be done, from the "shorties" through Astropulse, is pretty large. Under BOINC, credit should be comparable across projects, and the work ranges from tasks that last a few minutes all the way up to CPDN models that run for months.

Clock time would account for that, but there is no incentive for those with faster hosts. My C7-D (which will successfully finish an AP work unit in 200 hours -- optimized) would get paid at the same rate as Mark's cryogenically-accelerated i7.

A benchmark gives us a rough measure of performance. The flaw is in writing a benchmark that doesn't get mangled when BOINC is compiled by different compilers (GCC vs. MSVC), or even "improved" by updated compiler switches.

So, we try to count FLOPs, but we're ignoring the fact that a floating-point ADD is a lot faster than a floating-point COS() -- both count as "1" -- and we aren't counting each individual FLOP (which would double the number of instructions and slow things down); we're counting passes through some big loops, and they're probably estimating the number of FLOPs in libraries.
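
A toy illustration of that loop-level style of counting; this is purely hypothetical, not the SETI@home source, and the 5 * N * log2(N) figure is just a common rule-of-thumb estimate for an FFT:

    import math

    total_flops = 0.0

    def do_fft(data):
        # The real transform would go here. Instead of counting every individual
        # ADD or COS (each of which would count as "1" anyway), add one
        # rule-of-thumb estimate per pass through the hot loop.
        global total_flops
        n = len(data)
        total_flops += 5.0 * n * math.log2(n)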

Counting FLOPs completely ignores memory architecture and speed, it ignores all of the non-floating point work, all of the flow control.

... and it ignores the fact that for Multibeam, the mix of "fast" and "slow" FLOPs is different for different angle ranges, which is why some angle ranges overpay, and why your RAC can take a dive if the telescope is doing certain studies (and moving in certain ways) that generate lots of "unfavorable" angle ranges. I don't pay that much attention to which is which.

It's possible to adjust the "FLOP" count to reflect how well a given CPU type does floating point, but it would vary by project. That also ignores the off-chip architecture.

So I think ultimately you have to decide: do you get upset that the problem is far more complex than it seems it should be, or do you accept it and, if you really care about credits, take advantage of the credit system?

But I'm reminded of the Golgafrinchams, who (according to the Hitchhiker's Guide to the Galaxy) declared the leaf their official currency. Trouble is, leaves weren't worth anything, so it took several deciduous forests to buy one ship's peanut.

A million BOINC credits and $5 will get you a cup of coffee at Starbucks, if you don't order something too fancy.

But it is still fair. Everyone has the same opportunity to maximize their return, to buy the best hardware, to use optimized apps. that do the work faster -- and we're all equally likely to get hit by the occasional old client or low credit claim.
ID: 903311
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903314 - Posted: 3 Jun 2009, 20:15:02 UTC - in response to Message 903302.  


If I have a computer that is doing a job today and gets paid 5 whatevers, then next year, for doing the same work, it should still get paid 5 whatevers, not 4.2 because a new computer did the same job in less time. The new computer should get paid 5.8 whatevers.

I missed this in the first reading, and it's important.

Under both the benchmark * time system, and flop counting, if you do a work unit and get 5 cobblestones, and the new ultra-mega-go-fast does it ten times faster, it should get exactly 5 cobblestones.

They've done the same work, they are supposed to get exactly the same credit.

Counting FLOPs does this very accurately.

The machine going ten times faster will complete nine more work units while you're doing that one, it gets paid more because it did ten work units, and you did one.

That's the idea, that's the goal.
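
In made-up numbers (nothing here is measured, it's just the goal restated):

    credit_per_wu = 5.0
    slow_wus_per_day = 4     # hypothetical slower host
    fast_wus_per_day = 40    # a host ten times faster

    print(credit_per_wu * slow_wus_per_day)   # 20.0 credits per day
    print(credit_per_wu * fast_wus_per_day)   # 200.0 credits per day, same 5 per WU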

The problem is implementation. You can't really tell if you've got drift until you run for a while and see the drift.

If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift upward, some will be angry because their old credits aren't worth as much as new work.

If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift downward, some will be angry because their new credits aren't worth as much as old work.

In other words, there is no way to correct faults without drawing much criticism.
ID: 903314
Profile zoom3+1=4
Volunteer tester

Joined: 30 Nov 03
Posts: 65747
Credit: 55,293,173
RAC: 49
United States
Message 903318 - Posted: 3 Jun 2009, 20:24:10 UTC - in response to Message 903314.  

Oh I do get It:

PC A(Not overclocked and No Cuda) gets paid for one WU per hour.
PC B(Is overclocked and has Cuda) gets paid for ten WU per hour.

Both PCs get paid the same amount per WU, It's just that PC B does more work and so It gets more work units done than PC A.

Mind You the PCs are proverbial PCs, Not actual PCs.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 903318
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903324 - Posted: 3 Jun 2009, 20:36:28 UTC - in response to Message 903318.  

Oh I do get It:

PC A(Not overclocked and No Cuda) gets paid for one WU per hour.
PC B(Is overclocked and has Cuda) gets paid for ten WU per hour.

Both PCs get paid the same amount per WU, It's just that PC B does more work and so It gets more work units done than PC A.

Mind You the PCs are proverbial PCs, Not actual PCs.

They're proverbial in the sense that I pulled the numbers out of thin air.

... but I think it's a safe guess that for any given machine, we can find another that does either ten times the work, or one tenth of the work.

My slow cruncher took 15909.64 seconds to finish one Multibeam WU with the AK v.8 optimized build from Lunatics.

My wingman did the same workunit in 2,058.11 (with the stock client). It's a quad core, so probably four-at-a-time.

So, that Macintosh is something like 30 times faster? It gets paid 30 times in the time it takes "the slug" to get paid once?
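
The rough arithmetic behind that 30, using the run times quoted above (the four-at-a-time figure is only my guess about the quad core, not a measurement):

    slow_seconds = 15909.64    # one Multibeam WU on the slow cruncher
    fast_seconds = 2058.11     # the same WU on the wingman's quad core
    concurrent_tasks = 4       # assuming it crunches four at a time

    per_task_speedup = slow_seconds / fast_seconds           # about 7.7
    throughput_ratio = per_task_speedup * concurrent_tasks   # about 31
    print(round(per_task_speedup, 1), round(throughput_ratio, 1))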

I'm okay with that.
ID: 903324
Larry256
Volunteer tester

Joined: 11 Nov 05
Posts: 25
Credit: 5,715,079
RAC: 8
United States
Message 903380 - Posted: 3 Jun 2009, 23:50:46 UTC - in response to Message 903314.  

I missed this in the first reading, and it's important.

Under both the benchmark * time system, and flop counting, if you do a work unit and get 5 cobblestones, and the new ultra-mega-go-fast does it ten times faster, it should get exactly 5 cobblestones.

They've done the same work, they are supposed to get exactly the same credit.

Counting FLOPs does this very accurately.

The machine going ten times faster will complete nine more work units while you're doing that one, it gets paid more because it did ten work units, and you did one.

That's the idea, that's the goal..


I agree, same thing I was trying to say but better.

The problem is implementation. You can't really tell if you've got drift until you run for a while and see the drift.


We've been running the new MB for how long now? I can't see how it would still be off after more than 3 months. The fact that CUDA may be changing how much credit I get without it tells me something's not right. (Not to bring up the benchmarks in the new version of BOINC, which don't have anything to do with this at all. LOL)

If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift upward, some will be angry because their old credits aren't worth as much as new work.

If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift downward, some will be angry because their new credits aren't worth as much as old work.

In other words, there is no way to correct faults without drawing much criticism.


The credits are worth the same under both. The difference is that under type 1 I would have to keep buying a new computer every so often to make the same amount. The credits, in MY view, should go up as new computers come online. I don't think that devalues the older credits, or the older computers that got them. Everybody understands computers get faster all the time, so they should get more credits with every new computer. I can see that in x years my new computer, the HAL9000, will spit out work so fast that I would get over a billion credits a day, or the credits will keep being reworked and I'll still get the same as today.
ID: 903380
Profile zoom3+1=4
Volunteer tester

Joined: 30 Nov 03
Posts: 65747
Credit: 55,293,173
RAC: 49
United States
Message 903419 - Posted: 4 Jun 2009, 1:35:11 UTC - in response to Message 903324.  

Oh I do get It:

PC A(Not overclocked and No Cuda) gets paid for one WU per hour.
PC B(Is overclocked and has Cuda) gets paid for ten WU per hour.

Both PCs get paid the same amount per WU, It's just that PC B does more work and so It gets more work units done than PC A.

Mind You the PCs are proverbial PCs, Not actual PCs.

They're proverbial in the sense that I pulled the numbers out of thin air.

... but I think it's a safe guess that for any given machine, we can find another that does either ten times the work, or one tenth of the work.

My slow cruncher took 15909.64 seconds to finish one Multibeam WU with the AK v.8 optimized build from Lunatics.

My wingman did the same workunit in 2,058.11 (with the stock client). It's a quad core, so probably four-at-a-time.

So, that Macintosh is something like 30 times faster? It gets paid 30 times in the time it takes "the slug" to get paid once?

I'm okay with that.

Macintosh? Only PCs here; I don't eat Apples or even own one, for that matter. One's a quad with a Cuda card, the other is a quad without one. In any case I was talking about a theoretical PC. My RAC is from the active PC; the other is off at the switch (I'd turned it off as I don't want to use it for a few months, but it came back on after the area suffered a power failure caused by lightning, so it's off at the switch now), at least until I can upgrade it more.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 903419
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903437 - Posted: 4 Jun 2009, 2:25:56 UTC - in response to Message 903380.  


The credits are worth the same under both. The difference is that under type 1 I would have to keep buying a new computer every so often to make the same amount. The credits, in MY view, should go up as new computers come online. I don't think that devalues the older credits, or the older computers that got them. Everybody understands computers get faster all the time, so they should get more credits with every new computer. I can see that in x years my new computer, the HAL9000, will spit out work so fast that I would get over a billion credits a day, or the credits will keep being reworked and I'll still get the same as today.

There is no reason to intentionally change the value of a credit. It is defined as:

Take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark.

By definition, that computer should get exactly 100 credits per day.

That is the standard. On that computer, every combination of projects and work units should add up to 100 credits per day.
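
That definition is where the old claim formula came from. A sketch of the arithmetic, consistent with the definition above (the exact constants and rounding in the real client may differ):

    SECONDS_PER_DAY = 86400.0

    def benchmark_time_claim(cpu_seconds, whetstone_mips, dhrystone_mips):
        # Average the two benchmarks, compare against the 1000/1000 reference
        # machine, and pay 100 credits per reference-machine day of work.
        host_rate = (whetstone_mips + dhrystone_mips) / 2.0
        reference_rate = 1000.0
        return 100.0 * (host_rate / reference_rate) * (cpu_seconds / SECONDS_PER_DAY)

    # The reference machine crunching for a whole day claims exactly 100:
    print(benchmark_time_claim(86400, 1000, 1000))   # 100.0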

Credit adjustment is exceedingly unpopular. I would suggest that, while it is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry.

... as we've seen in this thread.
ID: 903437
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903438 - Posted: 4 Jun 2009, 2:27:04 UTC - in response to Message 903419.  

Oh I do get It:

PC A(Not overclocked and No Cuda) gets paid for one WU per hour.
PC B(Is overclocked and has Cuda) gets paid for ten WU per hour.

Both PCs get paid the same amount per WU, It's just that PC B does more work and so It gets more work units done than PC A.

Mind You the PCs are proverbial PCs, Not actual PCs.

They're proverbial in the sense that I pulled the numbers out of thin air.

... but I think it's a safe guess that for any given machine, we can find another that does either ten times the work, or one tenth of the work.

My slow cruncher took 15909.64 seconds to finish one Multibeam WU with the AK v.8 optimized build from Lunatics.

My wingman did the same workunit in 2,058.11 (with the stock client). It's a quad core, so probably four-at-a-time.

So, that Macintosh is something like 30 times faster? It gets paid 30 times in the time it takes "the slug" to get paid once?

I'm okay with that.

Macintosh? Only PCs here; I don't eat Apples or even own one, for that matter. One's a quad with a Cuda card, the other is a quad without one. In any case I was talking about a theoretical PC. My RAC is from the active PC; the other is off at the switch (I'd turned it off as I don't want to use it for a few months, but it came back on after the area suffered a power failure caused by lightning, so it's off at the switch now), at least until I can upgrade it more.

As I said, I don't own the Macintosh. Someone else does.
ID: 903438
Terror Australis
Volunteer tester

Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 903529 - Posted: 4 Jun 2009, 9:05:42 UTC
Last modified: 4 Jun 2009, 9:06:43 UTC

IT'S ALL A CONSPIRACY !
By decreasing the number of credits per unit, the admins are putting psychological pressure on us to increase our crunching power (i.e. faster computers, more CUDA cards, etc.) in order to keep our RACs up.

This then provides the project with more GFLOPS of crunching power so they can play "Mine's still bigger than yours" at BOINC project admins' get-togethers.

Mwahahahahaha
ID: 903529