Message boards :
Number crunching :
Average Credit Decreasing?
BilBg (Joined: 27 May 07 · Posts: 3720 · Credit: 9,385,827 · RAC: 0)
dev builds:
- are first distributed as "alpha" among a limited number of testers (I'm not one of them)
- then released to the general (but advanced) public as "standalone packages" (one app in a .7z for manual "install") for beta testing by everybody who considers themselves an "advanced user"
- then included in the Lunatics installer and given to Eric Korpela to test on the "SETI@home beta" site

Only after that are they "promoted" to stock (distributed automatically here). The "stock" apps you are running are the same "Lunatics apps", only a few months older.

- ALF - "Find out what you don't do well ..... then don't do it!" :)
KLiK (Joined: 31 Mar 14 · Posts: 1304 · Credit: 22,994,597 · RAC: 60)
Well, v8.12 still doesn't work on "low budget GPUs"! It starts loading & then it goes to 0.001% (or a similar percentage) & sticks there... until the deadline comes! Every few days I have to clean BOINC of the SoG & sah WUs on that system... maybe I should delete the apps instead? :/

non-profit org. Play4Life in Zagreb, Croatia, EU
Kiska (Joined: 31 Mar 12 · Posts: 302 · Credit: 3,067,762 · RAC: 0)
Won't work; BOINC will automatically re-download them.
Wiggo (Joined: 24 Jan 00 · Posts: 36390 · Credit: 261,360,520 · RAC: 489)
well, v8.12 doesn't work on "low budget GPUs" still!

I'm receiving the same feedback here. High-end or later-generation Nvidia GPUs do fairly well with the SoG/OpenCL apps, but a lot of users with older mid- to lower-range Nvidia GPUs just get lock-ups, the app stops working, or it errors out, and that is making a lot of users want to give it up here (I won't get into all those useless Macs out there producing nothing but crap). :-(

I'm sorry, but there was nowhere near enough testing carried out on older mid- to lower-end Nvidia GPUs with these OpenCL-based Nvidia apps, IMHO (by an extremely long way), in beta before they were dumped on the poor, unenlightened and unsuspecting general users amongst us.

You must remember that there are very few advanced users out there (only 1% of us here can be regarded as advanced users), and any apps released here must suit the general user (the other 99%) who doesn't know where to go for help, not just those who know how to tune these apps so that they work properly.

I also still believe that Arecibo GPU apps should be treated as a separate entity from GBT GPU apps, so that users could choose the best app to suit those "common users'" computers.

Cheers.
BilBg (Joined: 27 May 07 · Posts: 3720 · Credit: 9,385,827 · RAC: 0)
well, v8.12 doesn't work on "low budget GPUs" still!

I don't understand "still" (?) Do you expect that somehow the app will change (in code) but still be marked with version 8.12?? If/when they (at the lab) decide to change the app they are forced to change the version #, else BOINC will not download the new app. So only if you see something like 8.13+ may it be a changed app (but not necessarily - sometimes they post a new version to fix other things (e.g. distribution to the "proper" hosts), not the code): http://setiathome.berkeley.edu/apps.php

I think the code to fix the "low budget GPUs" issues is already present in the current builds (r3500+), but Raistmer has to say Yes or No. You can test by running the "Lunatics v0.45 - Beta4" installer and selecting "SoG": http://setiathome.berkeley.edu/forum_thread.php?id=79704&sort_style=6&start=0

Or wait (a few months?) for Eric to post some new version - the executable will be the same as or newer than what is in the above installer (r3500), but written by the same programmer (Raistmer).

As you can see, what you run now (and call "v8.12") is: "OpenCL version by Raistmer, r3430" http://setiathome.berkeley.edu/result.php?resultid=5117339310
Raistmer (Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121)
Well, on a GT720 it works quite OK, and that's a low-end device with only 1 CU. I just have nothing less capable. So all who want even lower-end support had better join the beta (8.18 was deployed just a few minutes ago!) and test, instead of filling BOINC's board database with whining messages... (but I already said the same before...)

SETI apps news
We're not gonna fight them. We're gonna transcend them.
rcthardcore (Joined: 23 Nov 08 · Posts: 48 · Credit: 1,306,006 · RAC: 0)
PFFFFFFT.
KLiK (Joined: 31 Mar 14 · Posts: 1304 · Credit: 22,994,597 · RAC: 60)
well, v8.12 doesn't work on "low budget GPUs" still!

Again, a normal user will not want to bother with installing Lunatics! Neither do I want to install another program on the computer, only to have to update it again soon! BOINC should be enough for GPU crunching SETI@home... anybody saying that's not enough needs to recheck their compass on "user-friendly apps for the end user"! ;)

Will add a GT520 to the mix when I get home! ;)
KLiK (Joined: 31 Mar 14 · Posts: 1304 · Credit: 22,994,597 · RAC: 60)
Also a lot of errors on a mid-range Quadro 2000 with the sah & SoG apps/WUs: https://setiathome.berkeley.edu/results.php?hostid=7264653&offset=0&show_names=0&state=6&appid=

Just had to abandon two jobs that had been crunching for 3+ hours! :/

Using an app_config.xml with:

<app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_sah</plan_class>
    <non_cpu_intensive>1</non_cpu_intensive>
    <ngpus>0.5</ngpus>
    <cmdline>-use_sleep_ex 1 -sbs 256 -v 6 -period_iterations_num 100</cmdline>
</app_version>
<app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <non_cpu_intensive>1</non_cpu_intensive>
    <ngpus>0.5</ngpus>
    <cmdline>-use_sleep_ex 1 -sbs 256 -v 6 -period_iterations_num 100</cmdline>
</app_version>
Richard Haselgrove (Joined: 4 Jul 99 · Posts: 14674 · Credit: 200,643,578 · RAC: 874)
I've just come across a thought-provoking post by a relatively new Slovakian cruncher called Thomas Brada at GPUGrid - GPUGrid message 44499. He's posted an assessment of the estimated computational work done by various projects - according to the BOINC client's internal REC calculations - and compared it with the credit awarded by the project, sometimes manually and sometimes by CreditNew. The table is BOINC Pootis, and on his calculation - third table - SETI's credit awards come bottom of the two dozen or so projects he's assessed (including most of the best-known ones). There's no detail on methodology, but the calculation is interesting, and I'll try to get our local credit-walkers interested in it in the morning.
jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)
There's no detail on methodology, but the calculation is interesting, and I'll try to get our local credit-walkers interested in it in the morning.

I can make some pretty educated guesses for here, based on the REC end, given REC is based on cobblestones and the CreditNew one isn't (it's similar, but additionally normalised to the stock CPU app).

With pre-CreditNew numbers (MB only, Arecibo, no GPUs, fpops 'counted') being ~cobblestone scale, Arecibo MB-only numbers under v6 with CreditNew yielded ~30% (SIMD-enabled apps moved into stock, with no allowance for the 'free' flops). [Naturally they aren't 'free': they still cost power, dissipate heat and perform computational work, so should be paid out, but I digress.]

Since that time, a further factor-of-9 reduction in payout (bringing it down to around ~3.4% yield) wouldn't surprise me (even though it's certainly shocking to see it presented clearly; accurate or not, the order of magnitude is probably good enough to say it's broken...). Of that further factor of 9, ~3-4x is plausibly due to a combination of AVX introduction into stock, combined with increased retirement of old pre-~SSE2 machines. The remaining ~2-3x is possibly a general increase in throughput of the dominant normalisation reference, through architectural improvements within the dominant pool of CPU results.

What triggers my intuition that this ~30x combined drop since before CreditNew may be well and truly in the ballpark is the yield of my Mac Pro CPUs compared to my recollection of before CreditNew and GPUs: Alex Kan's Mac Pro sat at the #1 position with over 20k RAC. I don't even bother using the CPUs on my quite similar Mac Pro, as the payout is easily less than a tenth of that. These observations might be falsifiable/confirmable if the likes of BOINCstats or similar data goes back far enough.
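The combined reduction described above can be sanity-checked with quick arithmetic (a sketch only; the ~30% and factor-of-9 figures are the poster's recollections, not measured data):

```python
# Rough arithmetic behind the estimate above (illustrative, not measured).
pre_creditnew_yield = 1.00   # pre-CreditNew Arecibo MB payout ~= cobblestone scale
v6_creditnew_yield = 0.30    # ~30% yield after CreditNew + SIMD apps moved to stock
further_reduction = 9        # estimated additional reduction factor since then

current_yield = v6_creditnew_yield / further_reduction
combined_drop = pre_creditnew_yield / current_yield

print(f"current yield ~ {current_yield:.1%}")   # ~3.3%, close to the ~3.4% quoted
print(f"combined drop ~ {combined_drop:.0f}x")  # the ~30x mentioned below
```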
My feeling there is that project/BOINC staff may possibly have interpreted technological advance (Moore's Law + optimisation) as credit inflation, rather than what it was: increased throughput.

To my mind, given that forced client updates to correct Whetstone for SIMD are unlikely to succeed en masse, and a 10x-30x correction is unlikely, I maintain the simplest 'fair' correction would be for the server to use the known processor features (<p_features>) to derive a SIMD correction factor while computing pfc for estimates and claims. This should at the very least be designed to bring the compute-efficiency claims of the CPU applications down from their well-over-100% claims to the more realistic ~20%. Spot values for starting SIMD corrections can be readily derived by comparing BOINC Whetstone to SiSoft Sandra Lite single-threaded Whetstone (AVX or SSE as applicable).

Bringing RAC to match REC wouldn't be a requirement, but reversing some of the ongoing depreciation, and having some illustrative quality metric to do so, IMO would be 'fairer', and would reduce a lot of the estimate boundary problems that occur with new hosts.

Cobblestones on a given host, with enough power data, can be converted to kWh of energy and so to $. If anything, the cost per cobblestone has doubled or tripled for me over time (electricity cost), so cobblestones, if not staying the same value, should have risen in payout over time, not dropped.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
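A minimal sketch of the server-side correction proposed above: scale the client-reported Whetstone by a factor derived from the host's <p_features> string before computing pfc. The correction factors and the feature-string handling here are illustrative assumptions, not values from any BOINC source:

```python
# Hypothetical SIMD correction factors: ratio of vectorised throughput to the
# scalar Whetstone figure the client reports. Illustrative numbers only -
# real spot values would come from benchmarking, as suggested in the post.
SIMD_CORRECTION = {
    "avx": 4.0,   # assumed: an AVX app sustains ~4x the scalar benchmark
    "sse2": 2.0,  # assumed: an SSE2 app sustains ~2x
}

def corrected_peak_flops(whetstone_flops: float, p_features: str) -> float:
    """Scale a host's benchmarked Whetstone by the best SIMD level it supports,
    so efficiency claims are measured against realistic peak throughput."""
    features = set(p_features.lower().split())
    for feature, factor in SIMD_CORRECTION.items():  # ordered best-first
        if feature in features:
            return whetstone_flops * factor
    return whetstone_flops  # plain scalar host: no correction
```

With this in place, a 4 GFLOPS-Whetstone host with AVX would be treated as ~16 GFLOPS peak, so a measured 15 GFLOPS becomes a ~94% efficiency claim instead of a physics-defying 375%.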
Richard Haselgrove (Joined: 4 Jul 99 · Posts: 14674 · Credit: 200,643,578 · RAC: 874)
Interesting side-observation on REC, arising from some testing I'm doing at Beta (we've rather hijacked the current Beta News thread). I was observing that a Haswell intel_gpu seemed - according to REC - to be out-performing all four CPU cores combined. The figures are:

iGPU (SETI MB OpenCL): REC 7100
CPUs x4 (NumberFields integer math): REC 3200

Both figures are still stabilising and diverging - SETI increasing, NF decreasing. But then I saw that NF is calculating an APR of 0.32 GFLOPS. I'm pretty sure that is caused by a bad <rsc_fpops_est> choice by the project; I'd like to be sure that we understand whether and how that might distort REC, too.
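The distortion suspected here follows from how APR is formed: as I understand it, the server divides the project-supplied <rsc_fpops_est> by observed elapsed time, so an estimate that is too low makes a host look proportionally slower. A sketch (the "true" figures are invented for illustration):

```python
def apr_gflops(rsc_fpops_est: float, elapsed_seconds: float) -> float:
    """APR as a claimed-operations-per-elapsed-second rate, in GFLOPS."""
    return rsc_fpops_est / elapsed_seconds / 1e9

# Illustrative only: suppose a task really performs ~3.2e13 ops in 1000 s...
true_apr = apr_gflops(3.2e13, 1000)      # 32 GFLOPS of real throughput
# ...but the project set <rsc_fpops_est> 100x too low:
reported_apr = apr_gflops(3.2e11, 1000)  # 0.32 GFLOPS, as observed above
```

The elapsed time is the same either way; only the operations count claimed for it changes, which is why a bad <rsc_fpops_est> shows up directly in APR.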
jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)
But then I saw that NF is calculating an APR of 0.32 GFLOPS. I'm pretty sure that is caused by a bad <rsc_fpops_est> choice by the project: I'd like to be sure that we understand whether and how that might distort REC, too.

Yes, an inappropriate <rsc_fpops_est> choice will affect the REC calculation. The correct <rsc_fpops_est> choice statistically should fall on the median of actual computational throughput (some % of actual peak throughput, as opposed to the BOINC Whetstone as used). That's how counting the operations works, but in 'reverse' from a classical model. Fortunately estimation almost works (despite the incorrect BOINC Whetstone) because the actual runtime is pretty variable based on runtime conditions, so the error is buried in variance.

For predictive modelling:
- counted prediction from a pure model --> classical forward modelling, aesthetic
- statistical prediction model based on observation --> afterwards, synthetic
- reality --> suggested to be a fusion of the synthetic and aesthetic (aka engineering)

That gives us hints as to where the main problems lie: in the backwards classical implementation using averages, with the observations being bent to the model instead of the other way around.

i.e. the current method for predicting elapsed time:
- task uses this many operations: <rsc_fpops_est>
- host does it in X operations using peak_flops (aka BOINC Whetstone, which is broken)
- real hosts achieve >100% efficiency (defying the laws of physics and breaking the model); this gives a more or less correct time for the host, but an inflated efficiency figure
- use the inflated efficiency figure to scale everyone else down

Simplified model [which is fine]:
- task uses this many operations: <rsc_fpops_est>
- [Predict] the execution time based on some model (accuracy unimportant but useful); the model just represents compute efficiency
- host does it in X operations
- [Update] the model using the variance

The breakage in the first case is in the way BOINC Whetstone is injected, and this complicates the otherwise simple mechanism. Longer term, the fixes are to propagate host data back to app version, back to app, then to project, for global estimates, and to make predictions use the most local estimate available (e.g. new host with known app version? use the accumulated summary estimate for that app version as a starting estimate). Sure, datamining can be for good if not in the wrong hands.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
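The simplified [Predict]/[Update] loop described above can be sketched as a running efficiency estimate corrected after each completed task. The exponential moving average, the starting efficiency and the names here are my assumptions for illustration, not BOINC's implementation:

```python
class EfficiencyModel:
    """Toy predict/update estimator: predict elapsed time from <rsc_fpops_est>,
    then fold each observed runtime back into the efficiency estimate."""

    def __init__(self, peak_flops: float, alpha: float = 0.1):
        self.peak_flops = peak_flops
        self.efficiency = 0.2   # assumed starting point: ~20% of peak
        self.alpha = alpha      # EWMA smoothing factor (assumed value)

    def predict_seconds(self, rsc_fpops_est: float) -> float:
        # Predicted elapsed time at the current efficiency estimate.
        return rsc_fpops_est / (self.peak_flops * self.efficiency)

    def update(self, rsc_fpops_est: float, observed_seconds: float) -> None:
        # Efficiency implied by the observed runtime, blended into the model.
        observed_eff = rsc_fpops_est / (observed_seconds * self.peak_flops)
        self.efficiency += self.alpha * (observed_eff - self.efficiency)

model = EfficiencyModel(peak_flops=16e9)
for _ in range(50):  # a host that consistently runs at 40% of peak
    model.update(rsc_fpops_est=3.2e13, observed_seconds=5000)
# model.efficiency converges toward 0.4
```

The key property, matching the post: prediction accuracy only has to be "useful", because every observation pulls the efficiency estimate toward reality, rather than the observations being bent to fit a broken benchmark.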
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
If we join together in protest, perhaps we will be taken more seriously! I want to crunch for SETI, BUT, cannot justify the electric usage with CreditNew granting such low credit for 15 hours of daily computations on two computers. I know MANY of you have more systems than I; so, I KNOW that if these systems STOP crunching SETI and go elsewhere for ONE MONTH, the powers that be WILL take notice!

. . Actually he isn't. But most of us have simply grown tired of complaining about what is not changing. As Jason has told us, there is some action happening; it is just a slow process which, in the long run, will hopefully yield better results. It is far better if the solution is one that works consistently and will accommodate future changes. We have yet to see what will happen when Parkes comes online and what their data format will be, though I am sure the guys behind the curtain have seen what they will be dealing with. But a great many of us would like a better rating system, to have confidence that the numbers accurately reflect what we are spending our money to achieve.

Stephen
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
Now, as to ANYONE ELSE!

. . Hi TBar,
. . What new app is that? As far as I know the only new app available to the rank and file is SoG (which Raistmer is doing some excellent work on), but while the results are impressive and an improvement, they are certainly not double previous results.

Stephen .
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
TL, most everyone knows CreditNew in its present form is screwed, but in spite of that the 12k RAC I get here is worth exactly as much as the 90k RAC I get over at Einstein: exactly nothing.

. . Worth in dollar value? Very true.
. . But there has to be a reason for having a credit system at all, or it would not be there. It is part of the evaluation system that the schedulers use to assign work, so as long as it is inconsistent they are not working at optimum either. But you are also ignoring the "warm fuzzies"; that has a value, and if you doubt it, consider the large number of contributors that have pulled the plug on SETI since the introduction of V8/Greenbank WUs. I think you are underestimating the loss to the project in resources that this represents. And here you guys are telling another contributor that if he disagrees with something he should leave too. Well done, guys.

Stephen .
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
Hi,

. . Stirrer. So when will your special app be ready for Beta?

Stephen .
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
Hi,

. . Hi Al,
. . As Tim Brook-Taylor? (Home Improvement) says ... MORE POWER! hurgh hurgh hurgh!

Stephen .
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
. . Hi Klik,
. . While I cannot speak to the performance of the GT520, I can say that SoG r3500 gets results that are the equal of CUDA50 on my GT730. So maybe you should consider Lunatics and try out the r3500 release. Hopefully r3528 will be available soon, and it has some slight improvements again.
. . Go on, give it a try :)
. . You might be surprised when your numbers improve. :)

Stephen .
Stephen "Heretic" (Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628)
It's dark arts maths of the finest unfathomables, people, and I'd snort at it less if it said so in my user blurb

. . That is because, while it is largely in English, it is highly stylised to the point of near incoherence. It requires a sense of the dramatic and the irreverent (i.e. sarcasm) to understand much of it. But it might take too long to translate :)

Stephen .
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.