Message boards : Number crunching : again less credits?
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
It has been stated elsewhere that even SETI has historically been overpaying, and now that they're trying to actually tie credit to the Cobblestone, that probably tends to lower credit. I think Richard may have something when he talks about the effects of CUDA. I know that there are all kinds of issues with counting flops. I don't know, and I can't get that excited about it. My question is: do we bow to those demanding more credit, and turn the Cobblestone into "fiat money" or do we try to push toward a stable currency based on a measurable commodity? There are so many things that work against that, including the fact that the median machine isn't running the same architecture as the mythical "100 cobblestone computer." |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Richard Haselgrove wrote:
Maybe, maybe not. I think the enumeration which gathers the 10,000 results proceeds in host ID order, and given that there are on the order of 950 thousand results waiting for purging, I suspect no host with an ID above 2,000,000 is likely to be considered at all. While some users manage to upgrade their systems without getting a new ID, I think the low-numbered hosts probably don't include many with CUDA or the latest and greatest CPUs. That's bad statistics, but it's why I think the erosion we're seeing is mainly just older hosts retiring and thereby shifting the median up.

My 200 MHz Pentium MMX benches around 165 and 339, my 1.4 GHz Pentium-M around 1230 and 2520, so the sums have a ratio of about 7.44. For actual crunching the ratio is more like 29, and hosts with benchmarks twice my Pentium-M's are considerably more than twice as productive. It's a major flaw in the BOINC benchmark system.

Every time a host is double-entered in the long list, it reduces the diversity and representativeness of the short list. Yes, but if a host is above the median it matters not how far it's above. Agreed, it's bad statistics, but for the purpose of this credit adjustment I don't think it hurts.

Do you know if anything representing plan_class is stored in the result table? I think it would be fair for the results my CUDA hosts return from the 603 CPU app to be included in the 10,000 lottery, but results from the 608 CUDA app should be excluded from the average. That should still be OK for the VLARs which are sent out as 608 CUDA but which I return as 603 CPU (not so sure about Raistmer's Perl script, which can do the reverse transition).

The plan class is not in the result database, though the app_version_num is (the one the Scheduler told the host to use). The only other alternative would be to exclude CUDA in the hosts table, on the grounds that there should still be enough MB work to get a reasonable average from 10,000 results from non-CUDA hosts.

As long as CUDA hosts are a minority, their results will be above the median under the present system. If the BOINC devs get around to redoing everything in elapsed-time terms, that might move some 8400GS or similar CUDA hosts down to the vicinity of the median, and I'm not sure that would be a good thing.

Joe |
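To make the mechanics a bit more concrete, here is a rough sketch of the kind of sampling Joe is describing. Everything in it is assumed for illustration: the field names, the 608 exclusion via app_version_num, and the median-of-ratios statistic are guesses at the shape of the calculation, not the project's actual code.

```python
# Illustrative sketch only -- field names and the median-of-ratios statistic
# are assumptions, not SETI@home's actual schema or algorithm.
from statistics import median

def adjustment_sample(results, sample_size=10_000, cuda_app_version=608):
    """Walk results in host-ID order (as Joe suspects the enumeration does),
    skipping results returned by the 608 CUDA app, until 10,000 are gathered."""
    sample = []
    for r in sorted(results, key=lambda r: r["host_id"]):
        if r["app_version_num"] == cuda_app_version:
            continue                      # leave CUDA results out of the average
        sample.append(r)
        if len(sample) == sample_size:
            break
    return sample

def correction_multiplier(sample):
    """If the correction is something like 'typical benchmark-based claim over
    typical FLOP-based claim', then retiring slow hosts removes low ratios from
    the pool and the statistic drifts -- Joe's explanation for the erosion."""
    ratios = [r["benchmark_claim"] / r["flop_claim"] for r in sample]
    return median(ratios)
```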
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
A wee bit of a history lesson might be in order at this point. Nope. All computers can be referenced to the gold standard; no computer is required to meet it.

Take the old credit system. Any computer under the old system would get credit based on the benchmark times the number of seconds. So, let's say your "reference" machine did a 20-credit work unit in about 1700 seconds. Using the FLOP count method, it wanted 22 credits. You retire it and get a machine that goes ten times faster. It does a 20-credit work unit in about 170 seconds (it's ten times faster, right?), and the FLOP count is the SAME, so it wants 22 credits.

With either machine, we can calculate what would have been requested using Benchmark * Time, and what was requested using FLOPs, and using the (made up) numbers in the example, the multiplier would be roughly 0.9.

That's how it'd work in a simple world. There are so many other factors that affect this (cache size, memory speed, processor efficiency, etc.) that we'll never have a perfect credit system. |
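Worked through with Ned's made-up numbers, the point is that Benchmark * Time produces the same claim on both machines, so the multiplier that maps FLOP claims onto benchmark claims comes out the same either way. The snippet below is just that arithmetic; the benchmark rating is invented so the slow machine claims exactly 20.

```python
# Ned's made-up numbers: a 20-credit claim in ~1700 s on the slow machine,
# and a FLOP-based claim of 22 on any machine.
def benchmark_claim(benchmark_rating, seconds):
    """Old-style claim: benchmark rating times run time."""
    return benchmark_rating * seconds

FLOP_CLAIM = 22.0                       # same work unit, same FLOP count everywhere

slow_rating = 20.0 / 1700.0             # chosen so the slow host claims 20
slow = benchmark_claim(slow_rating, 1700.0)        # -> 20.0
fast = benchmark_claim(10 * slow_rating, 170.0)    # ten times faster -> still 20.0

multiplier = slow / FLOP_CLAIM          # ~0.91, which Ned rounds to 0.9
print(slow, fast, round(multiplier, 2))
```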
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14666 Credit: 200,643,578 RAC: 874 |
Josef W. Segur wrote:
As long as CUDA hosts are a minority, their results will be above the median using the present system. If the BOINC devs get around to redoing everything in elapsed time terms, that might move some 8400GS or similar CUDA hosts down to the vicinity of the median, and I'm not sure that would be a good thing.

Recording elapsed time, and using it in this calculation, would be a help. But if they're going to do that, they also need to record and use some sort of benchmark figure for the GPU (even if just the estimate BOINC gets from the card specification at startup) - and that for the particular GPU used for the task, given that a single host can have multiple CUDA cards of different speeds. |
Larry256 Send message Joined: 11 Nov 05 Posts: 25 Credit: 5,715,079 RAC: 8 |
We'll never have a Fair credit system. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Depends on what you mean by fair. If fair means equitable, then any credit system that grants credit without discriminating against or favoring any one group is "fair" ... and we had that. Trouble is, it wasn't highly repeatable -- the numbers varied a lot -- but it averaged out. What we have now is more influenced by CPU architecture than the old "benchmark * time" system -- some processors are favored over others. |
Conan Send message Joined: 30 Aug 05 Posts: 15 Credit: 1,585,618 RAC: 0 |
From what I am seeing, my AMD X2 4800+ machine used to get around 20 to 22 credits an hour running SETI, and it did so for a long while. I had noticed that the amount per WU was dropping, but didn't worry too much until I did a calculation or two. Now I am only getting 10 to 12 credits an hour. This is most assuredly a large drop.

Whether it is caused by CUDA or by SETI in general, I have already reduced my meager input even further. Running the standard application on my limited computer resources, I need all the credit I can get, and now I am not getting it. It appears that without an optimised application or a CUDA GPU card it no longer pays to run SETI. Even Rosetta, which has always paid less than SETI, now pays more. |
Larry256 Send message Joined: 11 Nov 05 Posts: 25 Credit: 5,715,079 RAC: 8 |
So you're saying the same thing I am: the current system is unfair. Why can't everybody see that? |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13822 Credit: 208,696,464 RAC: 304 |
Larry256 wrote:
So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

Because as much as people try to, fairness can't be strictly defined. It's very much subjective.

Grant
Darwin NT |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
No, I'm saying the opposite. Everyone can, if they wish, run the optimal hardware for SETI, hardware that will produce the best credit per hour possible. If you want to get the absolute maximum credit, there is enough information in the forums to do that.

... but if you do a given workunit, and I do the same workunit, and we both return valid results, we will get exactly the same amount of credit. Equal pay for equal work -- what could be more fair than that? The odds of getting a good-paying work unit or a poor-paying work unit are the same for everyone, so the distribution of work is fair. Everyone has the opportunity to favor one kind of work or another. There is no discrimination in multibeam vs. astropulse, so that's fair.

Your argument seems to be that credit should be more consistent, and while I agree that's a good goal, it's not possible without a lot more accounting in the science application, and I'd rather not see the time wasted on accounting. But it is obviously completely fair. |
Larry256 Send message Joined: 11 Nov 05 Posts: 25 Credit: 5,715,079 RAC: 8 |
So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

OK, let me reword it. Can we have a credit system that is less influenced by CPU architecture than the old "benchmark * time" system? Maybe one that won't devalue the credits over time for the amount of work done.

If I have a computer that is doing a job today and gets paid 5 whatevers, then next year, for doing the same work, it should still get paid 5 whatevers, not 4.2 because a new computer did the same job in less time. The new computer should get paid 5.8 whatevers. Some people want it the first way, and some want it the second way. Whichever way it is, they need to be up front about it. Not saying "well, we've been overpaying and need to fix it" every 6-9 months, then spreading the word to most every other project and saying they are now overpaying. It's getting old.

I need another drink. :) lol |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Larry256 wrote:
So you're saying the same thing I am: the current system is unfair. Why can't everybody see that?

If it was an easy problem, there would be an easy solution.

The easiest system would be to just count workunits, but the range of work to be done, from the "shorties" through Astropulse, is pretty large. Under BOINC, credit should be comparable across projects, and some projects have work that lasts a few minutes, up through CPDN.

Clock time would account for that, but there is no incentive for those with faster hosts. My C7-D (which will successfully finish an AP work unit in 200 hours -- optimized) would get paid at the same rate as Mark's cryogenically-accelerated i7.

A benchmark gives us a rough measure of performance. The flaw is in writing a benchmark that doesn't get mangled when BOINC is compiled by different compilers (GCC vs. MSVC), or even "improved" by updated compiler switches.

So, we try to count FLOPs, but we're ignoring the fact that a floating-point ADD is a lot faster than a floating-point COS() -- both count as "1" -- and we aren't counting each individual FLOP (which would double the number of instructions and slow things down); we're counting passes through some big loops, and probably estimating the number of FLOPs in libraries. Counting FLOPs completely ignores memory architecture and speed; it ignores all of the non-floating-point work and all of the flow control.

... and it ignores the fact that for Multibeam, the mix of "fast" and "slow" FLOPs is different for different angle ranges, which is why some angle ranges overpay, and why your RAC can take a dive if the telescope is doing certain studies (and moving in certain ways) that generate lots of "unfavorable" angle ranges. I don't pay that much attention to which is which.

It's possible to adjust the "FLOP" count to reflect how well a given CPU type does floating point, but it would vary by project. That also ignores the off-chip architecture.

So I think ultimately, you have to decide: do you get upset about the fact that the problem is far more complex than it seems it should be, or do you accept it, and if you really care about credits, take advantage of the credit system?

But I'm reminded of the Golgafrinchans, who (according to the Hitchhiker's Guide to the Galaxy) declared the leaf their official currency. Trouble is, leaves weren't worth anything, so it took several deciduous forests to buy one ship's peanut. A million BOINC credits and $5 will get you a cup of coffee at Starbucks, if you don't order something too fancy.

But it is still fair. Everyone has the same opportunity to maximize their return, to buy the best hardware, to use optimized apps that do the work faster -- and we're all equally likely to get hit by the occasional old client or low credit claim. |
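The "counting passes through some big loops" idea is easy to misread, so here is a toy sketch of what that style of bookkeeping looks like. It is purely illustrative (the function and counter are invented, not SETI@home's code); the point is that one bulk estimate is added per loop, and a cheap add and an expensive cos() contribute equally to it.

```python
# Toy sketch of coarse FLOP accounting -- invented code, not the SETI@home app.
import math

class FlopCounter:
    """Accumulates an estimated FLOP total for the whole task."""
    def __init__(self):
        self.flops = 0.0

    def add(self, n):
        self.flops += n

counter = FlopCounter()

def process_block(data, est_ops_per_sample=5):
    """Pretend inner loop: one bulk counter update per pass through the loop,
    not a meter on every individual operation."""
    total = 0.0
    for x in data:
        total += math.cos(x) * x + 1.0      # a costly cos() and a cheap add
    counter.add(est_ops_per_sample * len(data))
    return total

process_block([0.01 * i for i in range(1000)])
print(counter.flops)   # 5000 estimated "FLOPs", however fast or slow they really ran
```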
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
I missed this in the first reading, and it's important.

Under both the benchmark * time system and FLOP counting, if you do a work unit and get 5 cobblestones, and the new ultra-mega-go-fast does it ten times faster, it should get exactly 5 cobblestones. They've done the same work; they are supposed to get exactly the same credit. Counting FLOPs does this very accurately. The machine going ten times faster will complete nine more work units while you're doing that one; it gets paid more because it did ten work units and you did one. That's the idea, that's the goal.

The problem is implementation. You can't really tell if you've got drift until you run for a while and see the drift.

If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift upward, some will be angry because their old credits aren't worth as much as new work. If a project sees that it has caused a drift downward, some will be angry because their new credits aren't worth as much as old work. In other words, there is no way to correct faults without drawing much criticism. |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 66125 Credit: 55,293,173 RAC: 49 |
Oh I do get It: PC A (not overclocked and no CUDA) gets paid for one WU per hour. PC B (overclocked and with CUDA) gets paid for ten WU per hour. Both PCs get paid the same amount per WU; it's just that PC B does more work and so gets more work units done than PC A. Mind you, the PCs are proverbial PCs, not actual PCs.

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Oh I do get It:

They're proverbial in the sense that I pulled the numbers out of thin air ... but I think it's a safe guess that for any given machine, we can find another that does either ten times the work, or one tenth of the work.

My slow cruncher took 15,909.64 seconds to finish one Multibeam WU with the AK v8 optimized build from Lunatics. My wingman did the same workunit in 2,058.11 seconds (with the stock client). It's a quad core, so probably four-at-a-time. So, that Macintosh is something like 30 times faster? It gets paid 30 times in the time it takes "the slug" to get paid once? I'm okay with that. |
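For what it's worth, the "something like 30 times faster" figure follows from the two run times plus the assumption that the quad core really is running four tasks at once:

```python
# Ned's numbers: per-core speed ratio times four concurrent tasks.
slow_seconds = 15909.64
fast_seconds = 2058.11
per_core = slow_seconds / fast_seconds     # ~7.7x faster per task
throughput = per_core * 4                  # ~31x overall, i.e. roughly 30 times
print(round(per_core, 1), round(throughput, 1))
```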
Larry256 Send message Joined: 11 Nov 05 Posts: 25 Credit: 5,715,079 RAC: 8 |
1mp0£173 wrote:
I missed this in the first reading, and it's important.

I agree; same thing I was trying to say, but better.

1mp0£173 wrote:
The problem is implementation. You can't really tell if you've got drift until you run for a while and see the drift.

We've been running the new MB for how long now? I can't see how it would still be off after more than 3 months. The fact that CUDA may be changing how much credit I get without it tells me something's not right. (Not to bring up the benchmarks in the new version of BOINC, which don't have anything to do with this at all. LOL)

1mp0£173 wrote:
If a project sees that whatever it is doing (new client, credit adjustment, whatever) has caused a drift upward, some will be angry because their old credits aren't worth as much as new work.

The credits are worth the same under both. The difference is that under type 1 I would have to keep buying a new computer every so often to make the same amount. The credits, in MY view, should go up as new computers come online. I don't think that devalues the older credits, or the older computers that got them. Everybody understands computers get faster all the time, so they should get more credits with every new computer. I can see that in X amount of years, with my new computer the HAL9000, it will spit out work so fast I would get over a billion credits a day -- or the credits will keep being reworked and I'll still get the same as today. |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 66125 Credit: 55,293,173 RAC: 49 |
Oh I do get It:

Macintosh? Only PCs here; I don't eat Apples or even own one, for that matter. One's a quad with a CUDA card, and the other is a quad with no CUDA card. In any case I was talking about a theoretical PC. My RAC is from the active PC; the other is off at the switch (I'd turned it off as I don't want to use it for a few months, but it came back on after the area suffered a power failure caused by lightning, so it's off at the switch now), at least until I can upgrade it more.

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
There is no reason to intentionally change the value of a credit. It is defined as: take a computer that benchmarks 1,000 double-precision MIPS based on the Whetstone benchmark, and 1,000 VAX MIPS based on the Dhrystone benchmark. By definition, that computer should get exactly 100 credits per day. That is the standard. Every combination of projects and work units should pay that computer 100 credits per day.

Credit adjustment is exceedingly unpopular. I would suggest that, while it is the right thing to do, any fix is likely to be nearly impossible without making a lot of users angry ... as we've seen in this thread. |
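Put as arithmetic, the definition Ned quotes pins the scale so the reference machine earns exactly 100 credits per day. The sketch below just restates that; the exact constants and formula in BOINC's own source may differ, this only reproduces the numbers as stated here.

```python
# Sketch of the benchmark-based definition as stated in this post -- not a
# copy of BOINC's source; the constants simply reproduce the description above.
SECONDS_PER_DAY = 86_400
REFERENCE_MIPS = 1_000        # 1,000 Whetstone and 1,000 Dhrystone MIPS
CREDITS_PER_DAY = 100         # the reference machine earns 100 credits/day

def benchmark_credit(whetstone_mips, dhrystone_mips, cpu_seconds):
    """Average the two benchmark ratings and scale so a full day on the
    reference machine comes out to exactly CREDITS_PER_DAY."""
    avg = (whetstone_mips + dhrystone_mips) / 2
    return cpu_seconds / SECONDS_PER_DAY * avg / REFERENCE_MIPS * CREDITS_PER_DAY

print(benchmark_credit(1_000, 1_000, SECONDS_PER_DAY))   # -> 100.0
```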
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Oh I do get It:

As I said, I don't own the Macintosh. Someone else does. |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
IT'S ALL A CONSPIRACY ! By decreasing the number of credits per unit, the admins are putting psychological pressure on us to increase our crunching power (ie faster computers, more CUDA cards etc.) in order to keep our RAC's up. This then provides the project with more GFlops of crunching power so they can play "Mine's still bigger than yours" at BOINC project admins get togethers. Mwahahahahaha |