again less credits?



Message boards : Number crunching : again less credits?

Previous · 1 · 2 · 3 · 4 · 5 · 6 · 7 · Next
Author Message
Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1667
Credit: 203,501,593
RAC: 25,071
Australia
Message 903529 - Posted: 4 Jun 2009, 9:05:42 UTC
Last modified: 4 Jun 2009, 9:06:43 UTC

IT'S ALL A CONSPIRACY !
By decreasing the number of credits per unit, the admins are putting psychological pressure on us to increase our crunching power (ie faster computers, more CUDA cards etc.) in order to keep our RAC's up.

This then provides the project with more GFlops of crunching power so they can play "Mine's still bigger than yours" at BOINC project admins get togethers.

Mwahahahahaha

WinterKnight
Volunteer tester
Joined: 18 May 99
Posts: 8502
Credit: 23,078,298
RAC: 15,445
United Kingdom
Message 903540 - Posted: 4 Jun 2009, 10:18:57 UTC - in response to Message 903529.

IT'S ALL A CONSPIRACY !
By decreasing the number of credits per unit, the admins are putting psychological pressure on us to increase our crunching power (ie faster computers, more CUDA cards etc.) in order to keep our RAC's up.

This then provides the project with more GFlops of crunching power so they can play "Mine's still bigger than yours" at BOINC project admins get togethers.

Mwahahahahaha

The last thing you want is more CUDA cards if you want the MB credits to return to the 'correct' (CPU-only) level.
AP v5 credits have remained steady, within 3%, since its introduction. But over a slightly longer period, since the introduction of CUDA, MB credits have fallen by ~15%.

Now I'm not saying CUDA is a bad idea. I actually think it is a good idea, but it has been let down by BOINC. The BOINC devs probably knew CUDA apps were on their way months before they appeared, and yet over 6 months after the Seti CUDA app was released the BOINC client still has problems. The problems, as far as credits are concerned, are the total inability to report GPU time and a differing FLOPs count for the same task completed on GPU compared to CPU.
Add in the other BOINC problems of suspended tasks and the inability to download the correct number of tasks for the cache size, plus reports of overheating and failure of CUDA cards, and you soon realise why, unless you have the time to observe and micro-manage Seti/BOINC, it is probably not the time to introduce more CUDA cards.

We do have three CUDA capable cards, one in the E6600 and two in the Q9450 computers, but they are my sons' gaming machines. Disabling the gaming function for any reason is not an option. Youngest son is about 6 inches taller than me.

Betting Slip
Joined: 25 Jul 00
Posts: 89
Credit: 716,008
RAC: 0
United Kingdom
Message 903541 - Posted: 4 Jun 2009, 10:40:20 UTC - in response to Message 903540.

Disabling the gaming function for any reason is not an option. Youngest son is about 6 inches taller than me.


Seems like a good call :)
____________

Profile Ageless
Joined: 9 Jun 99
Posts: 12258
Credit: 2,550,037
RAC: 620
Netherlands
Message 903549 - Posted: 4 Jun 2009, 11:06:18 UTC - in response to Message 902600.
Last modified: 4 Jun 2009, 11:28:48 UTC

Putting the cat among the pigeons...

Once upon a time, some guys at Berkeley came up with this BOINC thingy, and they thought it'd be good if they could bring in all the other projects and issue credit for work done, and have the credit be comparable between projects.

So if you got 50 credits on SETI and 50 credits on CPDN, it is because you had done equal work.

This was never formally written down anywhere, it was only informally asked that projects followed the same 'rules for giving out credit'. It's still not a demand, just a request.

So... it could be that you get 50 credits for Seti, but 33.333 for another project entirely, simply because they don't necessarily want to 'play ball'.

Pigeons still alive? Send in more cats, then.
Here's what Rom Walton had to say about it when I asked about the magic credits:

Rom Walton wrote:
On a technical level they can hand out whatever amount of credit they want.

The whole credit normalization thing is the self governing part of the community among the projects kicking in.

Let us say that over the course of 5 years you had acquired 500,000 credits with project A, then project B comes along and hands that out in a task. Would that make you feel good about all the time and effort you put into project A?

Right now there is an informal agreement among the projects to try and keep things in check. I suspect that if that agreement ever fell apart, it would then fall on the stats sites to decide which projects were worthy of tracking credit accumulation. By that I mean if I set up my own project tomorrow and granted myself 10,000,000 credits, it doesn't mean anything until it shows up on a stats site and can affect my team's placement in the ranks. Or my own for that matter.

____________
Jord

Fighting for the correct use of the apostrophe, together with Weird Al Yankovic

EPG
Joined: 3 Apr 99
Posts: 110
Credit: 10,405,863
RAC: 0
Hungary
Message 903555 - Posted: 4 Jun 2009, 11:26:18 UTC - in response to Message 903540.


We do have three CUDA capably cards, one in the E6600 and two in the Q9450 computers but they are my sons gaming machines. Disabling the gaming function for any reason is not an option. Youngest son is about 6 inches taller than me.


You have to remember the David vs. Goliath approach and use the exclusive_app cc_config option :D
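For anyone hunting for that option: exclusive_app lives in cc_config.xml in the BOINC data directory, and the client suspends all computation while the named program is running. A minimal sketch; the executable name below is only a placeholder, not from this thread.

```xml
<cc_config>
  <options>
    <!-- Suspend BOINC (CUDA crunching included) while the game runs.
         "game.exe" is a placeholder; use the game's actual executable name. -->
    <exclusive_app>game.exe</exclusive_app>
  </options>
</cc_config>
```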

____________

Larry256
Volunteer tester
Joined: 11 Nov 05
Posts: 25
Credit: 868,703
RAC: 18
United States
Message 903583 - Posted: 4 Jun 2009, 13:22:11 UTC - in response to Message 903549.
Last modified: 4 Jun 2009, 13:26:13 UTC

Putting the cat among the pigeons...

Once upon a time, some guys at Berkeley came up with this BOINC thingy, and they thought it'd be good if they could bring in all the other projects and issue credit for work done, and have the credit be comparable between projects.

So if you got 50 credits on SETI and 50 credits on CPDN, it is because you had done equal work.

This was never formally written down anywhere, it was only informally asked that projects followed the same 'rules for giving out credit'. It's still not a demand, just a request.

So... it could be that you get 50 credits for Seti, but 33.333 for another project entirely, simply because they don't necessarily want to 'play ball'.

Pigeons still alive? Send in more cats, then.
Here's what Rom Walton had to say about it when I asked about the magic credits:

Rom Walton wrote:
On a technical level they can hand out whatever amount of credit they want.

The whole credit normalization thing is the self governing part of the community among the projects kicking in.

Let us say that over the course of 5 years you had acquired 500,000 credits with project A, then project B comes along and hands that out in a task. Would that make you feel good about all the time and effort you put into project A?

Right now there is an informal agreement among the projects to try and keep things in check. I suspect that if that agreement ever fell apart, it would then fall on the stats sites to decide which projects were worthy of tracking credit accumulation. By that I mean if I set up my own project tomorrow and granted myself 10,000,000 credits, it doesn't mean anything until it shows up on a stats site and can affect my team's placement in the ranks. Or my own for that matter.


That just means that if they can't persuade the projects, they'll persuade the stats sites. That's the collective's way.


Larry256
Volunteer tester
Joined: 11 Nov 05
Posts: 25
Credit: 868,703
RAC: 18
United States
Message 903586 - Posted: 4 Jun 2009, 13:44:48 UTC - in response to Message 903437.


The credits are worth the same under both. The difference is that under type 1 I would have to keep buying a new computer every so often to make the same amount. The credits, in MY view, should go up as new computers come online. I don't think that devalues the older credits, or the older computers that got them. Everybody understands computers get faster all the time, so they should get more credits with every new computer. I can see that in x amount of years, with my new computer, the HAL9000, it will spit out work so fast I would get over a billion credits a day; or the credits will keep being reworked and I'll still get the same as today.

There is no reason to intentionally change the value of a credit. It is defined as:

Take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark.

By definition, that computer should get exactly 100 credits per day.

That is the standard. Every combination of projects and work units should get 100 credits per day.

Credit adjustment is exceedingly unpopular. I would suggest that, while this is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry.

... as we've seen in this thread.


So if my HAL9000 benchmarked 10,000,000,000 double-precision MIPS on the Whetstone benchmark, and 10,000,000,000 VAX MIPS on the Dhrystone benchmark, would you have no problem with the insane amount of credit that it would earn in a day?
Or would you want it to get the mean of all computers at that time?
Then years later, when a new computer came out that was twice as fast as HAL, would I get less because HAL is behind the mean?
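For concreteness, the benchmark * time claim implied by the cobblestone definition quoted above can be sketched in a few lines. This is an illustration of the arithmetic only, not the actual BOINC server code:

```python
def claimed_credit(cpu_seconds, whetstone_mips, dhrystone_mips):
    """Classic benchmark-based credit claim.

    Per the cobblestone definition quoted above, a host scoring
    1,000 MIPS on both benchmarks earns exactly 100 credits per day.
    """
    avg_mips = (whetstone_mips + dhrystone_mips) / 2.0
    cpu_days = cpu_seconds / 86400.0
    return 100.0 * cpu_days * (avg_mips / 1000.0)

# The reference host from the definition, running for one day:
print(claimed_credit(86400, 1000, 1000))   # -> 100.0
# Larry's hypothetical HAL9000 (10,000,000,000 MIPS on both benchmarks):
print(claimed_credit(86400, 1e10, 1e10))   # -> 1000000000.0
```

By this arithmetic HAL9000's billion credits a day follows directly from the definition, which is exactly the point of the question.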

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903623 - Posted: 4 Jun 2009, 15:51:12 UTC - in response to Message 903586.


There is no reason to intentionally change the value of a credit. It is defined as:

Take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark.

By definition, that computer should get exactly 100 credits per day.

That is the standard. Every combination of projects and work units should get 100 credits per day.

Credit adjustment is exceedingly unpopular. I would suggest that, while this is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry.

... as we've seen in this thread.


So if my HAL9000 benchmarked 10,000,000,000 double-precision MIPS on the Whetstone benchmark, and 10,000,000,000 VAX MIPS on the Dhrystone benchmark, would you have no problem with the insane amount of credit that it would earn in a day?
Or would you want it to get the mean of all computers at that time?
Then years later, when a new computer came out that was twice as fast as HAL, would I get less because HAL is behind the mean?

Larry,

You're repeatedly missing one very important point.

The credit adjustment finds the median from the sample and calculates the credit based on "benchmark * time" and slowly adjusts the FLOPs-based score to match what "benchmark * time" would have given.

If the fleet gets twice as fast, the "middle" gets faster, benchmark will double but the time will be half, and the number is the same.

That calculation is then used to adjust the FLOPs calculation.

Cobblestones don't change just because the median credit changes. The standard is very concrete.

I've repeatedly said "a cobblestone is 1/100th of the daily output of a machine with these characteristics" -- the term "mean" or "median" does not appear in that sentence.

I'm fine with your "HAL 9000" getting a billion credits per day. You should worry about it opening the pod bay doors.

-- Ned
____________

zoom314
Joined: 30 Nov 03
Posts: 45782
Credit: 36,406,399
RAC: 7,405
Message 903625 - Posted: 4 Jun 2009, 15:54:50 UTC - in response to Message 903623.


There is no reason to intentionally change the value of a credit. It is defined as:

Take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark.

By definition, that computer should get exactly 100 credits per day.

That is the standard. Every combination of projects and work units should get 100 credits per day.

Credit adjustment is exceedingly unpopular. I would suggest that, while this is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry.

... as we've seen in this thread.


So if my HAL9000 benchmarked 10,000,000,000 double-precision MIPS on the Whetstone benchmark, and 10,000,000,000 VAX MIPS on the Dhrystone benchmark, would you have no problem with the insane amount of credit that it would earn in a day?
Or would you want it to get the mean of all computers at that time?
Then years later, when a new computer came out that was twice as fast as HAL, would I get less because HAL is behind the mean?

Larry,

You're repeatedly missing one very important point.

The credit adjustment finds the median from the sample and calculates the credit based on "benchmark * time" and slowly adjusts the FLOPs-based score to match what "benchmark * time" would have given.

If the fleet gets twice as fast, the "middle" gets faster, benchmark will double but the time will be half, and the number is the same.

That calculation is then used to adjust the FLOPs calculation.

Cobblestones don't change just because the median credit changes. The standard is very concrete.

I've repeatedly said "a cobblestone is 1/100th of the daily output of a machine with these characteristics" -- the term "mean" or "median" does not appear in that sentence.

I'm fine with your "HAL 9000" getting a billion credits per day. You should worry about it opening the pod bay doors.

-- Ned

And while not wearing a spacesuit. ;)

I'd have thought this 'less credits' thing was dead; I guess I might need a space suit.
____________

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903627 - Posted: 4 Jun 2009, 15:59:13 UTC - in response to Message 903540.

Now I'm not saying CUDA is a bad idea. I actually think it is a good idea, but it has been let down by BOINC. The BOINC devs probably knew CUDA apps were on their way months before they appeared and yet over 6 months after the Seti CUDA app was released the BOINC client still has problems.

As I understand it, Nvidia did just enough to port one of the two SETI science applications to CUDA, and then the developers at Nvidia were assigned to other projects.

That may even mean that when Multibeam is updated (they've turned up the sensitivity more than once during the past decade) that the CUDA application will disappear.

It's probably too early to call, but if the CUDA app becomes abandonware, it's probably on Nvidia.

We also tend to forget that BOINC is a development project too, one that has to answer to a lot of projects, not just SETI, and cater to the projects as well as the users. Lots of useful work is being done successfully, so it's important to test thoroughly before releasing new versions, and testing takes time.
____________

Profile Geek@Play
Volunteer tester
Joined: 31 Jul 01
Posts: 2463
Credit: 85,113,949
RAC: 16,047
United States
Message 903669 - Posted: 4 Jun 2009, 17:59:13 UTC

I have come to believe that a person's RAC can go up and down naturally depending on the data being crunched and mainly on what the telescope was doing when the Seti data was recorded. Nothing we can do about it, it just happens. And those of us who crunch the same configuration for months and months at a time can see this.

I have also observed that when everyone's RAC is going up, nobody complains or reports it. When everyone's RAC starts to go down, due to the nature of the data being crunched, then more complaints show up here.

That's just life with Boinc/Seti and distributed computing.

____________
Boinc....Boinc....Boinc....Boinc....

Profile perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 14,894,844
RAC: 11,784
United States
Message 903673 - Posted: 4 Jun 2009, 18:06:54 UTC - in response to Message 903669.

When my RAC goes up I figure I've done something right for a change. When it goes down I start looking for a problem. I usually stay out of these credit discussions since I'm not all that concerned about how much they give me, just so long as it's the same for everybody.
____________


PROUD MEMBER OF Team Starfire World BOINC

Larry256
Volunteer tester
Joined: 11 Nov 05
Posts: 25
Credit: 868,703
RAC: 18
United States
Message 903684 - Posted: 4 Jun 2009, 18:48:16 UTC - in response to Message 903623.

So you're ready to make a stand against this proposal then: Message 701328.

The issue that Henri started with on this thread: if credit scores are "normalized" so that the median host on SETI makes 100 credits/day, and all other projects are normalized so that the same median computer gets 100 credits/day, it does two things:

•It lowers credits because the median machine now gets a lot more than 100 credits/day.

•As machines get faster, the median moves up, and over time credit will go down

I'm not arguing for or against, but I am arguing that it would be good to understand the proposal before you condemn it.


I think you're now against it because of:
I'm fine with your "HAL 9000" getting a billion credits per day.
...for the same reason I am.

LOL

Josef W. Segur
Volunteer developer
Volunteer tester
Joined: 30 Oct 99
Posts: 4202
Credit: 1,030,017
RAC: 264
United States
Message 903719 - Posted: 4 Jun 2009, 20:31:12 UTC - in response to Message 903623.


There is no reason to intentionally change the value of a credit. It is defined as:

Take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark.

By definition, that computer should get exactly 100 credits per day.

That is the standard. Every combination of projects and work units should get 100 credits per day.

Credit adjustment is exceedingly unpopular. I would suggest that, while this is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry.

... as we've seen in this thread.


So if my HAL9000 benchmarked 10,000,000,000 double-precision MIPS on the Whetstone benchmark, and 10,000,000,000 VAX MIPS on the Dhrystone benchmark, would you have no problem with the insane amount of credit that it would earn in a day?
Or would you want it to get the mean of all computers at that time?
Then years later, when a new computer came out that was twice as fast as HAL, would I get less because HAL is behind the mean?

Larry,

You're repeatedly missing one very important point.

The credit adjustment finds the median from the sample and calculates the credit based on "benchmark * time" and slowly adjusts the FLOPs-based score to match what "benchmark * time" would have given.

If the fleet gets twice as fast, the "middle" gets faster, benchmark will double but the time will be half, and the number is the same.

That's the basic misunderstanding. If the benchmarks double, actual productivity might quadruple since the science apps tune themselves to processor capabilities. That's mainly why my 200 MHz Pentium MMX and 1400 MHz Pentium-M have benchmarks near the 1:7 clock rate ratio but the Pentium-M host is around 29 times more productive. Core i7 hosts running at clock rates around twice that of the P-M tend to benchmark 2.5 to 3 times higher, but seem to be about 5 or 6 times as productive. Architecture improvements have fairly small effects on the benchmarks but are very worthwhile for real computations.

That calculation is then used to adjust the FLOPs calculation.

Cobblestones don't change just because the median credit changes. The standard is very concrete.

I've repeatedly said "a cobblestone is 1/100th of the daily output of a machine with these characteristics" -- the term "mean" or "median" does not appear in that sentence.

I'm fine with your "HAL 9000" getting a billion credits per day. You should worry about it opening the pod bay doors.

-- Ned


The benchmarks are very limited; the Cobblestone is an elastic concept because the benchmarks only use the most basic computational capabilities possessed by all hosts.

The original time * benchmarks credit method was a form of wages; each host took a little test and was 'paid' strictly on that basis. A 0.44x AR Enhanced WU takes about 388000 seconds on my Pentium MMX host and the credit claim would be about 95.1; the same AR on my Pentium-M host takes about 12602 seconds and the credit claim would be 27.4 or so. The older host would of course not be granted its higher claim; even the P-M would often be paired with faster hosts claiming less. The worst feature of the method was that it provided no motivation to improve science apps, since they had no effect on how much 'pay' was given for a day's work.

The fpops_cumulative method is a piecework approach, and definitely more equitable in my view though not perfect. The server-side adjustment is needed because nobody has yet come up with a better standard than the Cobblestone though the benchmarks are increasingly poor measures of compute capability.

Joe
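Joe's "piecework" fpops_cumulative method can be sketched numerically. This is a hedged illustration, not SETI's server code: it only assumes that the cobblestone definition quoted earlier implies 1 GFLOP/s sustained for one day is worth 100 credits.

```python
def fpops_credit(fpops_cumulative):
    """Piecework-style claim: credit from counted floating-point operations,
    assuming 1 GFLOP/s sustained for one day == 100 credits (8.64e13 fpops)."""
    fpops_per_100_credits = 86400 * 1e9   # one day at 1 GFLOP/s
    return 100.0 * fpops_cumulative / fpops_per_100_credits

# A task counted at 8.64e13 floating-point operations claims 100 credits,
# regardless of how slow or fast the host that crunched it was.
print(fpops_credit(8.64e13))   # -> 100.0
```

Under this scheme a faster science app raises a host's credit per day, which is exactly the incentive Joe says the old benchmark-wages method lacked.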

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903737 - Posted: 4 Jun 2009, 21:22:58 UTC - in response to Message 903719.
Last modified: 4 Jun 2009, 21:30:43 UTC


The benchmarks are very limited; the Cobblestone is an elastic concept because the benchmarks only use the most basic computational capabilities possessed by all hosts.

<much removed>

Joe

[edit/clarification]I'm starting with the statement that "If the definition of a cobblestone is..." and going from there. Changing the standard is a different (and probably worthwhile) discussion, but it's a different conversation.[/edit]

I agree. My only caveat is that every benchmark is limited.

The best story about this came early in my computing career, back when mainframes were the hot ticket (and smaller computationally than most PCs).

There was a competition. The prospective buyer put out bids, and as I remember the competition was for a new mainframe, and they scored based on how well each system ran a benchmark.

Each vendor had an early optimizing compiler. I don't remember the language but it was either Fortran or COBOL.

... and the benchmarks were compiled and run.

One bidder reported the time: 0 seconds.

The program had exactly two output statements, one that printed "start" and one that printed "done." That bidder's optimizer traced back through the code and eliminated every statement that did not contribute to the output.

Trouble is, not only is the Cobblestone defined in terms of benchmarks, but in terms of two specific benchmarks.

If we're actually measuring credit in Cobblestones, then we have to remain faithful to the definition.
____________

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 903738 - Posted: 4 Jun 2009, 21:28:32 UTC - in response to Message 903684.

So you're ready to make a stand against this proposal then: Message 701328.

The issue that Henri started with on this thread: if credit scores are "normalized" so that the median host on SETI makes 100 credits/day, and all other projects are normalized so that the same median computer gets 100 credits/day, it does two things:


I have because the post you referenced is factually incorrect.

The post says the plan is to normalize credit so the median host gets 100 credits per day.

What the script does (or at least tries) is normalize so that a moving average of 30 median hosts, selected from a daily sample over the past 30 days, would get the same credit using FLOPs as they would get using benchmark * time.

Those are very, very different.
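Ned's description of the script suggests a sketch like the following. Everything here is an assumption-laden illustration (the sample shape, taking each day's median host, a plain average over the sampled days), not the actual adjustment script:

```python
import statistics

def adjustment_ratio(daily_samples):
    """daily_samples: one list per day (up to 30 days), each holding
    (benchmark_time_claim, flops_claim) pairs for that day's sampled hosts.

    For each day, take the median host's benchmark*time-to-FLOPs ratio,
    then average those daily ratios.  The result is the multiplier applied
    to FLOPs-based credit so the median host earns roughly what
    benchmark * time would have given it.
    """
    daily_ratios = [
        statistics.median(bench / flops for bench, flops in hosts)
        for hosts in daily_samples
    ]
    return sum(daily_ratios) / len(daily_ratios)

# Two days of toy data: benchmark*time claims run 1.5x and 2.5x the FLOPs claims.
print(adjustment_ratio([[(150.0, 100.0)], [(250.0, 100.0)]]))   # -> 2.0
```

Because the target is the median host's benchmark * time earnings, not a fixed 100 credits/day, a fleet that gets uniformly faster does not drag everyone's credit down, which is the distinction Ned is drawing.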

____________

Profile [seti.international] Dirk Sadowski
Volunteer tester
Joined: 6 Apr 07
Posts: 7022
Credit: 59,228,844
RAC: 20,536
Germany
Message 903871 - Posted: 5 Jun 2009, 4:58:23 UTC


Thanks to all!


Hmm... finally... someone [I mean someone, not all! ;-)] should PM the Berkeley crew about the 'credit adjustment script' which runs periodically, and ask them to choose only CPU-only PCs, without CUDA GPUs, for the 'average calculation'?

____________
BR



>Das Deutsche Cafe. The German Cafe.<

Profile Westsail and *Pyxey*
Volunteer tester
Joined: 26 Jul 99
Posts: 338
Credit: 20,538,216
RAC: 0
United States
Message 904130 - Posted: 5 Jun 2009, 23:15:43 UTC - in response to Message 903738.
Last modified: 5 Jun 2009, 23:26:52 UTC


What the script does (or at least tries) is normalize so that a moving average of 30 median hosts, selected from a daily sample over the past 30 days, would get the same credit using FLOPs as they would get using benchmark * time.

Those are very, very different.

*thumbs up*
That is the first time it has been explained in a way I understood. That is ideal and makes perfect sense.

The concern I think many are raising, along with my own previous worry, was that credit would continually be "adjusted" so that my hot new machine today (sitting in 2012) would get the same RAC my previous brand-new machine had gotten when it was first brought online years prior, whereas it is actually doing, say, 4x the work the previous host did in the same time.

Thanks!
____________
"The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! (I found it!) but rather, 'hmm... that's funny...'" -- Isaac Asimov

Profile -=SuperG=-
Joined: 3 Apr 99
Posts: 63
Credit: 51,568,051
RAC: 12,826
Canada
Message 905605 - Posted: 9 Jun 2009, 23:32:24 UTC

Is this why my RAC has dropped from 72,000 back in May to 61,000 today? Or is there some other sinister workings going on?
____________
Boinc Wiki




"Great spirits have always encountered violent opposition from mediocre minds." -Albert Einstein

Profile Byron S Goodgame
Volunteer tester
Joined: 16 Jan 06
Posts: 1151
Credit: 3,936,993
RAC: 0
United States
Message 905609 - Posted: 9 Jun 2009, 23:40:41 UTC - in response to Message 905605.
Last modified: 10 Jun 2009, 0:05:18 UTC

That appears to be what's going on, and from what I'm seeing of my pending credits, the trend seems to still be going down, so I'd expect it will drop further.

Edit: I have to admit I prefer Ned's response after mine. I prefer the humor, though I thought his name was J.S.G. Boggs. LOL
____________


Copyright © 2014 University of California