Message boards : Number crunching : Boinc Credit - Cobbles, New, Screws, and Another New Idea?
Joined: 25 Nov 01 · Posts: 21695 · Credit: 7,508,002 · RAC: 20
If it matched the original definition of the "Cobblestone", as being based on the number of floating-point operations in a given time period by a theoretical processor, then we wouldn't see the vast variation in credit awarded for a given run-time... Two thoughts that follow on from that:

* What resource do we 'reward' with what credit value? Reward the time taken, or work done, or energy consumed, or what?

* Note that the combination of how credit is calculated and the effects of optimized apps persuaded various 'competitive' users on s@h to favour nVidia GPUs...
See new freedom: Mageia Linux · Take a look for yourself: Linux Format · The Future is what We all make IT (GPLv3)
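[For reference, the original definition appealed to above: one Cobblestone is 1/200 of a day of work on a computer sustaining 1 GFLOPS, i.e. a 1 GFLOPS host earns 200 credits per day. A minimal sketch of the arithmetic that definition implies; the task cost in the example is a hypothetical figure, not a real SETI task.]

```python
# The classic Cobblestone definition: a host sustaining 1 GFLOPS
# earns 200 credits per day, so credit for a task is proportional
# to the floating-point operations it performed.

GFLOPS = 1e9
SECONDS_PER_DAY = 86400.0
CREDITS_PER_GFLOPS_DAY = 200.0

# Credits per floating-point operation implied by the definition.
CREDIT_PER_FLOP = CREDITS_PER_GFLOPS_DAY / (GFLOPS * SECONDS_PER_DAY)

def credit_for_task(total_flops: float) -> float:
    """Credit implied by the Cobblestone definition for a task that
    performed `total_flops` floating-point operations."""
    return total_flops * CREDIT_PER_FLOP

# Example: a task costing 4.32e13 flops (hypothetical) is worth
# 100 credits regardless of how long the host took to crunch it.
print(credit_for_task(4.32e13))  # -> 100.0
```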
Sirius B · Joined: 26 Dec 00 · Posts: 24929 · Credit: 3,081,182 · RAC: 7
For myself, my science/engineering bias favours keeping the credit referenced to some real measurement... Maybe a DC version of Moore's Law?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13918 · Credit: 208,696,464 · RAC: 304
> What resource do we 'reward' with what credit value? Reward the time taken, or work done, or energy consumed, or what?

Work done, i.e. FLOPS, the very thing the Cobblestone is based on. Why would you reward people just for time taken or power used? That would just encourage the lowest possible level of efficiency, and taking as long as possible to return work.

> Note that the combination of how credit is calculated and the effects of optimized apps persuaded various 'competitive' users on s@h to favour nVidia GPUs...

Pay people for the work done. Those who are competitive will then go for the most efficient options, i.e. the most work done in the shortest time possible. It's not rocket science: there is already a defined reference to determine what work was done and how to acknowledge it. It just needs to be used.

Grant
Darwin NT
W-K 666 · Joined: 18 May 99 · Posts: 19616 · Credit: 40,757,560 · RAC: 67
An indication of how badly Credit Screw decreased credits can be judged by comparing Seti to Einstein. Einstein went to fixed credits when Dr A introduced Credit Screw (CreditNew) to BOINC.

My computer, processing Seti 100% throughout March, managed on a good day an RAC of just over 30,000 for a few days. After all Seti tasks were cleared, it went 100% to Einstein, and my RAC there is over 200,000 and still climbing. If it tops out at 210,000, then Einstein grants credits at seven times the rate Seti did.
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
I don’t really care how many or how few credits a project awards, as long as it’s consistent with work performed WITHIN that project. It’s pointless to try to compare what one project awards with another, and they should be free to award whatever they want. Where possible, the WUs themselves should be worth roughly the same no matter what device processes them. Faster devices will end up with more credits simply by crunching WUs at a faster rate.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
W-K 666 · Joined: 18 May 99 · Posts: 19616 · Credit: 40,757,560 · RAC: 67
> I don’t really care how many or how few credits a project awards, as long as it’s consistent with work performed WITHIN that project

I then have to ask: why did/do you crunch at Seti, where that was not the case?
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
1. Because of my interest in the project.
2. WUs (of the same type) were more or less worth the same per unit time. WUs that took longer to crunch (on a certain device) generally "paid" more. Yeah, there was some variation, but it wasn’t grossly unfair for one device vs another; faster devices and faster apps earned more by doing more work. It wasn’t perfect, but it was good enough.

I just don’t think there should be ANY comparison between projects. Who cares that an i3 CPU can make a million credits a day on a project like Collatz? If you want to see someone’s standing, look at their position within each project, not how many total funny-money BOINC credits they have. They’re worthless after all.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14690 · Credit: 200,643,578 · RAC: 874
I have accepted the reality that David lost all meaningful control over Credit when he introduced CreditNew - it became a meaningless, dimensionless number, only useful, as others have said, within the closed world of each separate project. It's a shame, because the original definition did have an objective reality which, if handled properly, would have had meaning across the wider BOINC community and outside.

What I DON'T accept, and strongly object to, is David continuing to assert that the credit total, converted back at par to a FLOPS count, is an objective, scientific measure of the power of Distributed Computing. The front page of the BOINC website claims, today, that we have a collective daily production of 30.583 PetaFLOPS. Most of that seems to come from Collatz. I think it's close to scientific fraud.
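[To make the objection concrete: the post says the front-page figure is obtained by converting the credit total back to FLOPS "at par", i.e. by running the Cobblestone definition in reverse, so inflated credit from any one project inflates the headline number. A sketch of that conversion, assuming it works exactly this way; the back-calculated RAC figure is illustrative arithmetic, not a published statistic.]

```python
# Inverting the Cobblestone definition: 200 credits/day <=> 1 GFLOPS,
# so credits-per-day divided by 200 gives GFLOPS of sustained throughput.

def rac_to_flops(total_rac: float) -> float:
    """Sustained FLOPS implied, at par, by a credits-per-day total."""
    return total_rac / 200.0 * 1e9

def flops_to_rac(flops: float) -> float:
    """Credits per day that a given sustained FLOPS would earn at par."""
    return flops / 1e9 * 200.0

# The 30.583 PetaFLOPS front-page claim implies a combined RAC of
# roughly 6.12 billion credits per day, however those credits were
# actually awarded:
print(flops_to_rac(30.583e15))  # -> ~6.12e9 credits/day
```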
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13918 · Credit: 208,696,464 · RAC: 304
> and they should be free to award whatever they want.

Which is a shame, because the whole point of the Cobblestone & Credit was to allow valid & meaningful comparisons between projects, not just between systems on a particular project (and even there it's not that useful, as it intentionally penalises and reduces the Credit it pays out to applications/hardware it considers to be less efficient than its theoretical maximum possible).

Grant
Darwin NT
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
> > and they should be free to award whatever they want.
>
> Which is a shame, because the whole point of the Cobblestone & Credit was to allow valid & meaningful comparisons between projects, not just between systems on a particular project (and even there it's not that useful, as it intentionally penalises and reduces the Credit it pays out to applications/hardware it considers to be less efficient than its theoretical maximum possible).

As I recall, Cobblestones are ultimately based on FLOPS? But BOINC can't calculate flops properly, especially on GPUs, where it uses manufacturer-supplied info, and not all projects even use the same types of calculations, making flops-based credit kind of meaningless anyway. How are you going to base something purely on flops when you have projects like Milkyway, which use double precision at a fraction of the flops of another project using single precision?

This is why trying to compare between projects is pointless no matter what points system you want to use. Just look at the rank/standing/RAC within a project and don't try to make any comparisons between them.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13918 · Credit: 208,696,464 · RAC: 304
> As I recall, Cobblestones are ultimately based on FLOPS? But BOINC can't calculate flops properly, especially on GPUs, where it uses manufacturer-supplied info, and not all projects even use the same types of calculations, making flops-based credit kind of meaningless anyway. How are you going to base something purely on flops when you have projects like Milkyway, which use double precision at a fraction of the flops of another project using single precision?

By doing as Seti did before CreditNew: FLOP counting, with scaling factors to account for discrepancies, and a fallback to the project-supplied job estimates.

Grant
Darwin NT
rob smith · Joined: 7 Mar 03 · Posts: 22753 · Credit: 416,307,556 · RAC: 380
SETI never used the manufacturers' published data; it tried to work out what the scaling factor should be, based on a GPU's speed being a fixed multiplier of a "standard" CPU speed, then further massaged the figures until they sort of worked, which sounded fine in theory. But in so doing there was no attempt to validate the fudge factors, just keep on changing them in the hope that they would stabilise quickly, and this was done on a per-host basis. By using data from the client rather than the figure already calculated for the task (albeit a figure based on a hypothetical processor), it was prone to any change in host hardware, application or use, and could go off in a huff with one host for no apparent reason.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14690 · Credit: 200,643,578 · RAC: 874
The credit system(s) long predate the arrival of GPUs. Refer to the First, Second, and new (third) credit systems in CreditNew. Reading that document again now, there are some curious anomalies.

The First credit system is described as being based on "the CPU's peak performance", whereas in reality it was based on BOINC's implementation of the Whetstone benchmark measurement code. We in the community have always taken that to be a lowball figure, because the published definition of Whetstone emphasises the need to eliminate distortions caused by compiler optimisation - and we were trying to optimise the hell out of the best compilers available at the time!

But the interesting one for this discussion is the Second credit system - 'flopcounting', for short. Eric estimated the computational cost - in flops - of each type of inner loop in the processing of a task, and counted how many times the loop was executed. That worked for optimisations (using SIMD or whatever) that speeded up the loop, but failed for optimisations which skipped loop executions that weren't going to contribute to the final signal data. But neglecting those optimisations, the counting approach led to stable and consistent credit awards across the range of different task types, and with the single, unchanging 'credit multiplier' fiddle factor, credit was adjusted to be comparable to the results given by the first credit system.

But David adjudged that Eric was probably the only BOINC project administrator who would be painstaking enough to perform that initial estimation of the computational cost of their algorithm. He may well have been right, but we'll never know. So the third (new) credit system was designed to require the absolute minimum of hard, factual input from project administrators. Too little, in my view. It's all averages, feedback loops, and normalisation. It bears no relationship to real work at all - and I wouldn't have wanted my salary at work to have been calculated that way.
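[A rough sketch of the 'flopcounting' idea described above: the project estimates, once, the flop cost of each type of inner loop, the application counts loop executions, and credit is the weighted total converted via the Cobblestone definition. The loop names, costs, and counts below are invented for illustration; this is not SETI's actual code.]

```python
# Illustrative flop counting in the style of Seti's second credit
# system. Per-loop flop costs are estimated once by the project;
# the app counts how many times each loop ran; credit is the
# weighted total. All figures here are made up for the example.

FLOPS_PER_LOOP = {
    "fft": 5_000.0,        # estimated flops per FFT inner loop (hypothetical)
    "chirp": 1_200.0,      # estimated flops per chirp iteration (hypothetical)
    "pulse_find": 800.0,   # estimated flops per pulse-finding pass (hypothetical)
}

CREDIT_PER_FLOP = 200.0 / (1e9 * 86400.0)  # from the Cobblestone definition
CREDIT_MULTIPLIER = 1.0  # the single, unchanging fiddle factor, tuned once

def credit_from_loop_counts(loop_counts: dict) -> float:
    """Credit from counted loop executions, weighted by per-loop cost."""
    total_flops = sum(FLOPS_PER_LOOP[name] * n for name, n in loop_counts.items())
    return total_flops * CREDIT_PER_FLOP * CREDIT_MULTIPLIER

# An optimisation that merely speeds a loop up (SIMD etc.) doesn't
# change its count, so credit stays stable; one that *skips* loop
# executions would under-count, as noted in the post above.
counts = {"fft": 2_000_000_000, "chirp": 10_000_000_000, "pulse_find": 5_000_000_000}
print(credit_from_loop_counts(counts))  # -> roughly 60 credits
```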
Joined: 23 Aug 99 · Posts: 962 · Credit: 537,293 · RAC: 9
I think when a project issues more tasks than are required for the quorum, then once the quorum is met they should reduce the credit awarded to unnecessary spare tasks returned after the needed results. Something like:

* Up to 3 days after validation and quorum: 90% of granted credit
* Up to 7 days: 50% of granted credit
* Over 7 days after validation, but before the deadline: 10% of granted credit

Projects could also give bonus credit to hosts that consistently produce rapid turnaround of tasks. How that should be calculated, I will leave for others to discuss.
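[A minimal sketch of the schedule proposed above, purely to make the tiers concrete; the function name and the assumption that the clock starts at validation are illustrative, not an existing BOINC feature.]

```python
# Hypothetical credit-decay schedule for spare tasks returned after
# the workunit already validated, per the tiers proposed above.

def decayed_credit(granted: float, days_after_validation: float) -> float:
    """Credit for a spare task returned `days_after_validation` days
    after the workunit validated (but before the deadline)."""
    if days_after_validation <= 3:
        return granted * 0.90
    if days_after_validation <= 7:
        return granted * 0.50
    return granted * 0.10

print(decayed_credit(100.0, 2))   # -> 90.0
print(decayed_credit(100.0, 5))   # -> 50.0
print(decayed_credit(100.0, 10))  # -> 10.0
```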
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
I don't think that would work well. If someone has a "slow" host, it's not their fault that they turned the task in later than a fast host, so they shouldn't be penalized.

If you're trying to incentivize people not to crunch tasks that aren't needed, there's no infrastructure in place to effectively check from the client side whether your task is still needed. Even if you wanted to check every one of your tasks, looking up the results to see if someone had already submitted, it would be painstaking and far too much effort to look up thousands of tasks, find the one(s) not needed in your list, and potentially abort them. Something like that just isn't feasible; it's up to the project not to send out that many resends in the first place.

I have no problem with a bonus for fast turnaround though. That's what GPUGRID does, and it seems to work for them: +50% if returned in 24 hrs, +25% if returned in 48 hrs.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
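[The GPUGRID-style bonus mentioned above is easy to state as code. A sketch using the thresholds from the post; how GPUGRID actually measures the window and treats the boundaries is an assumption here.]

```python
# Fast-turnaround bonus in the style attributed to GPUGRID above:
# +50% if returned within 24 hours, +25% within 48 hours.
# The boundary handling below is an assumption, not GPUGRID's rule.

def credit_with_bonus(base: float, hours_to_return: float) -> float:
    """Granted credit after the turnaround bonus, if any."""
    if hours_to_return <= 24:
        return base * 1.50
    if hours_to_return <= 48:
        return base * 1.25
    return base

print(credit_with_bonus(100.0, 12))  # -> 150.0
print(credit_with_bonus(100.0, 36))  # -> 125.0
print(credit_with_bonus(100.0, 72))  # -> 100.0
```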
Joined: 23 Aug 99 · Posts: 962 · Credit: 537,293 · RAC: 9
I think the current situation at SETI is fairly exceptional; most projects don't usually send out 6 tasks when they only need 2 results. The deadlines should have been cut to 7 or 14 days at the end of March, or earlier.

If a project were to routinely send out more tasks than needed for quorum, they should enable code to reduce the unnecessary waste of time and effort. Some projects need rapid turnaround, while others can wait a few weeks or months for the results. They can't really change the rules after the tasks have already been sent.