Message boards :
Number crunching :
credits and workunits: a fair system needed for Boinc
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
continue the discussions please |
Sirius B Send message Joined: 26 Dec 00 Posts: 24912 Credit: 3,081,182 RAC: 7 |
Thanks Phud. Seti Classic pulled in a lot of crunchers, & with each level of achievement (set by the Seti project) one was able to print/save a certificate showing that achievement. There were no credits involved, yet people crunched the project, & at a pace that would not be imaginable today given the state of technology back then. Anyone for 56k dial-up? Personally, I used to enjoy watching the hours accumulate, especially as I only used the Net from 6pm Friday evening until 8am Monday morning, as BT had a rate of 0.01p a minute. Thought for the day: with some of the crunching beasts on the projects, I wonder what total hrs will be racked up per day? Would be nice to find out. (not by using a spreadsheet guys, by crunching...lol) |
Bill Walker Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
A bazillion credits, plus a buck fifty, will buy you a cup of coffee. Credits have some meaning to an individual, to provide some measure of crunching power as affected by settings, new hardware, time dedicated, etc. Beyond that they have no meaning. None. Zero. Zip. One user can work a lot harder to get 50 credits than another user does to get 50 million. Who is the better cruncher? What is a better cruncher? Both questions defy a meaningful answer. Unless you are ready to personally fund an entire project, the 50 RAC user is just as meaningful as the 50 million RAC user. They are both just somebody doing what they can. Therefore, do what you like with credit, just don't keep changing it. If double credit makes you feel good, good for you. It just confuses me. |
Sirius B Send message Joined: 26 Dec 00 Posts: 24912 Credit: 3,081,182 RAC: 7 |
But that's the point Bill. Until the economy went pear-shaped, I used to run a nice little farm (still do, but not as large), with it mainly being used for business purposes. During the time the full farm ran, it was nice to see the credits mount up & the regular competition between members on my team forum. Since the credit situation has been changed, & not for the 1st time, credits have begun to become totally meaningless & the competition seems to have faded, & that's a shame. The science is still getting done, but to me at least it seems it's beginning to lack the fun of participating. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Personally I liked the old system of counting the number of tasks a system did. I can think of a "credit system" that would work for S@H, but I don't know how it would work for other projects. With this project, tasks in the .42 AR are "normal", with higher and lower AR taking more or less time. So credit in the "normal" range would be 1. Then as the AR gets closer to either side, toward VLAR or VHAR, credits would be higher or lower, with 0.01 being the lower cap and an appropriate value being the high cap. Perhaps 2.0 being the max would be a good artificial limit, but whatever number it might be should be based on some kind of scale that someone without a degree in mathematics can figure out in a few minutes. I imagine most BOINC projects wouldn't work with a system like this. That is why they came up with a very complex system that few understand. That way no one can tell if it is actually working. :) SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
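HAL9000's scheme could be sketched as a simple multiplier keyed to a task's angle range. The 0.42 "normal" point and the 0.01/2.0 caps come from the post above; the interpolation between them is my own guess, not anything S@H actually implements:

```python
def credit_multiplier(angle_range: float,
                      normal_ar: float = 0.42,
                      low_cap: float = 0.01,
                      high_cap: float = 2.0) -> float:
    """Hypothetical per-task credit multiplier keyed to angle range (AR).

    Tasks at the "normal" AR (~0.42) earn the baseline 1.0.  VLAR tasks
    (AR well below 0.42, which run longer) scale up toward high_cap;
    VHAR tasks (AR above 0.42, which run shorter) scale down toward
    low_cap.  The normal_ar / angle_range shape is an assumption.
    """
    if angle_range <= 0:
        return high_cap
    raw = normal_ar / angle_range  # > 1 for VLAR, < 1 for VHAR
    return max(low_cap, min(high_cap, raw))

print(credit_multiplier(0.42))   # normal task → 1.0
print(credit_multiplier(0.05))   # VLAR, capped at 2.0
print(credit_multiplier(100.0))  # VHAR, floored at 0.01
```

The point HAL9000 makes survives in the sketch: anyone can predict their credit from the AR in seconds, no degree in mathematics required.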
rob smith Send message Joined: 7 Mar 03 Posts: 22534 Credit: 416,307,556 RAC: 380 |
Any credits system must take account of the actual processing rate, normalise it, then allocate a fair score per normalised second. Thus if you have a mega-fast processor which turns round a WU in a very short time, you will get (or should get) the same number of brownie points as a person with a very pedestrian processor - for a given "standard" WU. The problem is defining the standard WU, and countering those who gain advantage by moving WUs between processors. Neither of these is an easy problem to solve, particularly when one considers that a given cruncher might be crunching 24/7 for a few weeks, then doing 0.5/7 for the next month, just because that's the owner's lifestyle. Currently the credits model uses "peak flops", which I don't think is the right model, because it is very easy to get a very high peak flops while having a very low average flops. One way around this might be to periodically distribute a standard WU and use that to calibrate each cruncher. But this has flaws - what happens if you change the hardware, or software, between calibration runs? One solution is to rerun the calibration whenever a "significant anomaly" is detected, and freeze all results for that cruncher until the calibration cycle is completed. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
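rob smith's calibration idea could be outlined roughly like this: each host times a standard WU once, and credit for real work is then granted per second at a rate that makes every host earn the same amount for standard work. The function names and the 100-credit figure are invented for illustration:

```python
STANDARD_WU_CREDIT = 100.0  # arbitrary "brownie points" for the standard WU

def calibrate(standard_wu_seconds: float) -> float:
    """Per-host credit rate derived from one calibration run."""
    return STANDARD_WU_CREDIT / standard_wu_seconds

def granted_credit(cpu_seconds: float, credit_per_second: float) -> float:
    """Credit for a real WU, scaled by the host's calibrated rate."""
    return cpu_seconds * credit_per_second

# A mega-fast host (1,000 s on the standard WU) and a pedestrian
# one (10,000 s) each earn the same credit for standard-sized work.
fast_rate = calibrate(1_000.0)
slow_rate = calibrate(10_000.0)
print(granted_credit(1_000.0, fast_rate))
print(granted_credit(10_000.0, slow_rate))
```

The flaw rob smith points out remains visible here: the calibrated rate goes stale the moment the hardware or software changes between calibration runs.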
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
i think that in its pure form, just stating the hours spent plus wu's completed was the honest way to go. the recent "doubling" and "pay for doubling" does nothing but show how the credit system can be corrupted for no real reason. if you have to resort to cheap gimmicks to keep a project afloat, what does that really say about the honesty of the project? |
Sirius B Send message Joined: 26 Dec 00 Posts: 24912 Credit: 3,081,182 RAC: 7 |
i think that in its pure form That is the best statement I've seen regarding all the credit debates I've read for the past 4 years. |
Bill Walker Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
i think that in its pure form That sort of worked, until GPUs came along. It also gets very meaningless very quickly if you start comparing a S@H shorty WU (minutes to complete) to a CPDN WU (months to complete on the same machine). Does the relative completion time reflect the relative value of the science? Now there's a question to debate. Despite my somewhat cynical earlier post, I do realize that credit is one way of attracting new crunchers, and one way of encouraging crunchers to upgrade hardware. We have to recognize that, and that is where things get complicated. Personally, I'm in it for the science. And the free toaster. |
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
i think that in its pure form if you look at boincstats, for instance... you see a breakdown of credits achieved at different projects. the same could happen with wu completion + hours processed. say something like this: seti: 100 wu's completed, 1000 hours processor time. einstein@home: 101 wu's completed, 956 hours processor time. it would give meaning to each project's processing needs. a processor-intensive project would essentially be shown by its ratio of wu's/processor time... |
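Using nemesis's own example numbers, the per-project breakdown reduces to an hours-per-WU ratio, which is what would show how processor-intensive each project is:

```python
# Example figures from the post above; a real report would pull
# these from each project's stats exports (e.g. via boincstats).
projects = {
    "seti": {"wus": 100, "hours": 1000},
    "einstein@home": {"wus": 101, "hours": 956},
}

for name, p in projects.items():
    hours_per_wu = p["hours"] / p["wus"]  # processor-time cost per wu
    print(f"{name}: {p['wus']} wu's, {p['hours']} h, {hours_per_wu:.1f} h/wu")
```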
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
it would give meaning to each projects processing needs. Which is what Credits do. Grant Darwin NT |
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
really? how so? |
Bill Walker Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
I'm beginning to see some merit in Phud's idea. The individual user can track hours per WU to see if their tweaks and hardware upgrades have the intended results. Changes in WU size and/or complexity (which will continue to happen) will just require you to "wait a bit" till the numbers settle down again. Still not sure about cross-project comparisons though. Anything done will be subject to "players". If Project X makes WUs shorter, will that attract some users? Probably, but nothing is gained for the science. If an improvement in Project X science results in longer WUs, will that motivate some users to switch to another project? Probably yes again, despite an improvement in science. This is my final offer. Do what you like, I'm here for science. If we scare off some users, [shrug]oh well[/shrug]. I haven't seen a forum yet at any project with people complaining about too many users (except occasionally here at S@H). All I ever see is people demanding more WUs. Whether their motive is more science or more credit is really irrelevant, as long as more science gets done. |
Tazz Send message Joined: 5 Oct 99 Posts: 137 Credit: 34,342,390 RAC: 0 |
i think that in its pure form Waaaaay back in '99 when I started crunching, it was because I found the xx.xx.SETI usenet newsgroup. In it people were posting their times per wu; Guy 1: "I can do one wu in 3 hrs and 10 min." Guy 2: "Well, I can do one in 2 hrs and 58 min." Guy 1: "What have you got (motherboard setting) set to?" And so on. I had just spent a load of money on a 233 MMX PC and wanted to make sure it was running comparably to other 233s. Running a wu, checking the time, changing some settings, getting another wu ... It was fun. But I digress. When we put a new GPU through its paces, how do we usually share the results? One normal AR wu in xx minutes. Simply because it takes a while to build up a RAC, an inaccurate number that most people only read as 'higher is better'. I'm all for the hours spent on wu's completed. SETI@home classic workunits 12,500 SETI@home classic CPU time 54,781 hours </Tazz> |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
The best credit system we've had was the original benchmark * time. Because benchmarking is an inexact science, claimed credit wasn't really consistent, but requiring three results for a quorum, and throwing out the high and low, produced a fair and reasonably accurate result. (Fair means "equitable" -- everyone got a bonus sometimes, but the odds of a bonus, or a low result, were constant.) Unfortunately, that doesn't work for GPUs unless BOINC has benchmarks for CUDA, OpenCL, or whatever. The big problem we have when we leave benchmark * time is a lack of *comparable* credit from project to project. Science applications vary, and projects have to determine a scaling factor to convert "flops" into credit. Some projects don't care, and don't put much effort into awarding credit. Others attract credit hounds by awarding more credit than they should. In an ideal world, credits would scale automatically between projects. Still, it's amazing how "nothings" turn into "money," and people spend actual money to collect "BOINC money." |
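As I read the quorum scheme described above, each of the three hosts claimed benchmark * time credit, the high and low claims were thrown out, and the middle value was granted to everyone. A minimal sketch, assuming that reading is right:

```python
def quorum_grant(claimed: list[float]) -> float:
    """Grant the middle of three claimed credits, discarding the
    high and low claims to damp out benchmark noise."""
    if len(claimed) != 3:
        raise ValueError("quorum of three results required")
    return sorted(claimed)[1]

# Three hosts with noisy benchmarks claim slightly different credit
# for the same workunit; all three are granted the middle claim.
print(quorum_grant([48.2, 55.0, 51.7]))  # → 51.7
```

Taking the middle claim is what makes the scheme "equitable" in the sense the post describes: any one host's inflated or deflated benchmark cannot move the grant on its own.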
James Sotherden Send message Joined: 16 May 99 Posts: 10436 Credit: 110,373,059 RAC: 54 |
Interesting read, and some good solid points. I'd go for the hours and work units crunched. But really I crunch because I believe in Seti@Home. Yes, I have backup projects, but I only use them when we are down for days and I'm out of work. If Dr. A. says no more credit will be awarded, I will still crunch. I don't care what other projects pay in credit. But maybe we can have Dr. A. make a picture of a Seti@Home toaster :) Old James |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 31009 Credit: 53,134,872 RAC: 32 |
I'm not sure there is any fair way to compare across projects. The issue is that some projects use only integer math and others only floating point math. At present only the floating point speed benchmark is used in calculations. An arbitrary conversion can be done, saying that 5 integer adds = 1 floating point add, but is that fair across all processors? I think people are forgetting the original reason they went from units to credits. In the old system a shorty counted as much as a long. Not fair. But CPU time was also counted. That took the short/long into account, but did not take processor speed into account. The credit system was supposed to take both into account and come up with a single number. They tried benchmarks to see how fast the processor ran. What they got was a very crude system that is only fair if the science is the benchmark. As I understand CreditNew, it is more to prevent user fraud in open source than it is a serious attempt to even out credits. It addresses an optimized app that reports 100x work by comparing the claimed credit numbers with other machines. It can't do much about a project that claims 2X for every unit. Now perhaps if the credit software consulted manufacturers' data sheets on the number of cycles for each instruction and the speed of the clock on the system, something close to fair could be implemented. Then each science application would have to be counted as to how many of each type of instruction was run. Not going to happen. So what has happened is that each project admin has been taken at his word that his correction factor is honest. As crunchers say different projects pay different amounts, we know that the correction factors are not set uniformly; I can't say if that is intentional or not. I think most people believe it is intentional in some cases.
I don't know how the BOINC software can catch that unless it debugs the science application to count instructions (which would make it crawl), or science applications are registered with a central authority, which would end optimized apps. This isn't ever going away. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I do miss the old days of seeing how much CPU time I had contributed. Seeing the stats as: Results received: 93865 Total CPU time: 98.567 years Average CPU time per work unit: 9 hr 11 min 55.7 sec This gave me several goals to work towards. One of the big ones I was working on was an average time under 10 hours. I was hoping to end with 100,000 wu's and 100 years of CPU time, but I missed that mark. With the current credit system(s) I can really only work towards credit number goals, like "try to make these machines earn 100,000 credits this month", which doesn't really tell me how much work the machine is doing, just that it gets granted that amount of credit. Maybe they could implement these values again. Not to replace the current credit system, but to give us something else to track for fun. I imagine not all of this information has been stored since the switch over to BOINC, so they would have to pick an arbitrary date to implement it. Much like they do for the change in the credit system(s). SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
nemesis Send message Joined: 12 Oct 99 Posts: 1408 Credit: 35,074,350 RAC: 0 |
"I think people are forgetting the original reason they went from units to credits. In the old system a shorty counted as much as a long. Not fair. But CPU time was also counted. That took the short/long into account, but did not take into account processor speed." this is why a simpler method is better... if you're running a pentium 4 and getting 1 wu every 12 hours and you see someone else getting 48 wu's in 12 hours you'll look and see someone running opt's apps and a faster cpu/gpu. is that discrimination? no...you choose to crunch on what you have at your disposal. that very scenario could make the p4 user look into opt'd apps.. and maybe upgrade. i'm not saying its perfect to run wu's completed/hours processed, but it'd be a lot less corrupted that the system is now. |
SciManStev Send message Joined: 20 Jun 99 Posts: 6658 Credit: 121,090,076 RAC: 0 |
I agree with Bill Walker in that the science comes first. That is the reason I joined, and is the reason I will stay until I drop dead. How credits are calculated means less to me, as long as we are all using the same system. I do enjoy seeing how my rig compares against others, as it is natural human competition. As far as cross projects go, I see it as "When in Rome, do as the Romans do." Whatever system is employed at each project gives one a sense of how one's rig is performing against others doing the same tasks. To me it doesn't matter about total BOINC credits or rankings, as the credit systems are different. They make the most sense in the project they are in. Recently my rig was in the #3 spot of top computers. This was partially due to the amount of data it crunched, and partially due to what others did not crunch. Back during the Great Outage, I crunched for Einstein, and also brought my rig up to the #3 spot. In Einstein I got further with CPU power and hyperthreading. With Seti I get further with GPU power, and no hyperthreading. For whatever credit system is used, I learned to tune my system to get the most from it based on RAC and credits. In both cases, I did quite a lot of science, and that pleased me no end. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.