Message boards : News : SETI@home now supports Intel GPUs
(Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121)
You've got it backwards. Calibrating for the best optimized version reduces the credit grants for all versions and penalizes optimizations; calibrating for the worst increases the credit grants for all versions. As discussed already, that would be counter-productive for stock optimization. And no, we never calibrated on an anonymous-platform app, and the stock app was never the fastest. Credit will be reduced if the stock app is not changed to include already-implemented optimizations; hence the pressure from the credit system to make stock efficient.

Currently we see a big disagreement between MB and AP credit. Why? Because stock MB is a quite optimized app, while stock AP... well, I will be polite :). Hence the huge overpay for CPU-optimized AP and for GPU AP, both stock and optimized. The same holds for other projects: if a project has a poor stock app and someone releases an adequately optimized one, almost everyone goes to anonymous platform and enjoys enormous credits. And the server ignores times from anonymous platform, so no re-calibration happens. That's how it works now.
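[A minimal sketch of the calibration direction being argued here, with made-up per-version costs; it illustrates the idea, not the actual CreditNew server code.]

```python
# Toy illustration of the calibration direction (made-up numbers; not the
# BOINC server code). avg_pfc is the average "peak FLOP count" (device
# peak FLOPS x elapsed time) each app version spends per task: the slower
# the app, the more peak FLOPs it burns for the same work.
avg_pfc = {
    "stock_cpu": 9.0e13,   # slow, unoptimized build
    "opt_cpu":   3.0e13,   # SIMD-optimized build
    "gpu":       2.0e13,   # fastest version
}

def grant(calibrate_to: float) -> None:
    # Every version's claim is rescaled to the reference version's cost,
    # so all versions receive the same credit for the same workunit;
    # only the overall level changes with the choice of reference.
    for name, pfc in avg_pfc.items():
        scale = calibrate_to / pfc
        print(f"  {name:10s} scale={scale:.2f} credit~{pfc * scale:.2e}")

print("calibrated to best (most efficient) version:")
grant(min(avg_pfc.values()))   # lowers grants for all versions

print("calibrated to worst version:")
grant(max(avg_pfc.values()))   # raises grants for all versions
```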
(Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121)
To sacrifice project performance to please the credit system? Well, I would not consider that a good move.
kittyman (Joined: 9 Jul 00 · Posts: 51527 · Credit: 1,018,363,574 · RAC: 1,004)
> To sacrifice project performance to please the credit system? Well, I would not consider that a good move.

Just my opinion, but I do not think that SETI has sacrificed project performance in any manner to 'please the credit system'. I believe the apps they choose to supply as stock are chosen more to assure maximum compatibility with the full range of computer systems that volunteers may wish to attach, regardless of the volunteer's computer skills. They have to be plug and play even on computers with modest capabilities. Eric may have other comments, but I wanted to respond with my thoughts. He seems to be saying that the credit system should work correctly with the stock apps, which would then reward optimized apps for any increased performance, not the other way around.
Eric Korpela (Joined: 3 Apr 99 · Posts: 1383 · Credit: 54,506,847 · RAC: 60)
> So if they are on CreditNew they need to keep the slowest app for 'calibration'? ;)

Yes, actually. If the speed difference is large (and if the server works as advertised), then the slow version should be sent only very rarely. Of course, the second part of that "if" is the hard part.

The assumption, of course, is that a slow app is representative of the benchmark speeds. That's never the case: the benchmarks will always be faster than any real single-threaded SISD app. That's what the old "credit multiplier" was for.
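[A rough sketch of the benchmark-vs-reality gap Eric describes. The constants, the 200-credits-per-GFLOPS-day rate, and the exact pre-CreditNew claim formula are illustrative assumptions, not the historical server code.]

```python
# Toy numbers showing why a fixed "credit multiplier" was needed when
# claims were based on the synthetic Whetstone benchmark.
COBBLESTONE_PER_FLOP = 200.0 / (86400 * 1e9)  # assumed nominal rate:
                                              # 200 credits per GFLOPS-day

whetstone_flops = 3.0e9   # what the synthetic benchmark reports
real_app_flops  = 1.2e9   # sustained FLOPS of a real single-threaded
                          # SISD app on real data (cache misses, stalls)

task_fpops = 1.0e14                        # FP operations in one task
elapsed = task_fpops / real_app_flops      # actual run time in seconds

# Benchmark-based claim: run time valued at the (optimistic) benchmark
# speed, so it overstates the useful work actually done.
claim_from_benchmark = elapsed * whetstone_flops * COBBLESTONE_PER_FLOP
fair_claim = task_fpops * COBBLESTONE_PER_FLOP

# A fixed project-wide multiplier papered over the roughly constant
# ratio between real and benchmark throughput.
multiplier = fair_claim / claim_from_benchmark   # == real/benchmark speed
print(f"benchmark claim:   {claim_from_benchmark:.1f} credits")
print(f"fair claim:        {fair_claim:.1f} credits")
print(f"needed multiplier: {multiplier:.2f}")
```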
(Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)
> The assumption, of course, is that a slow app is representative of the benchmark speeds. That's never the case: the benchmarks will always be faster than any real single-threaded SISD app. That's what the old "credit multiplier" was for.

Hmmm, yeah. Whetstone should defeat certain compiler optimisations by design, but it likely does not pressure the memory hierarchy much (if at all). For an old FPU at least, that would place real, dataset-sized processing throughput much lower, closer to memory bandwidth minus stalls, because of small caches, some unfavourable access strides, and limited prefetch capability except where hardcoded. On the other hand, since memory speed back then was relatively closer to CPU core speed (fewer cache levels were needed), that was probably slightly fairer from one angle.

Anyway, I suppose it's all another take-away indicator: BOINC Whetstone might not have been the right choice even for pre-SIMD apps (which I hadn't considered yet). Well, at least we know pfc_scale and host_scale reach impossibly low values for genuine reasons. I guess poking this many holes in the basic scaling should result in fair solutions arriving eventually.

For the time being I'm still of the opinion that scale < 1 indicates parallelism (of either SIMD or mis-summed multithreading form), which could then be inverted. That would have the effect of declaring the original estimates a minimum possible (which they are here) and converging on the cobblestone scale, but I'm certain there are angles I haven't thought of yet.

I'm not sure that taking the scaling reference from the server side and placing it on the quite jiggerable BOINC Whetstone was a good move, though I guess the intent was to compensate for really bad initial estimates at various projects. The instabilities are something I can certainly help fix, but the base scaling would require decisions about which end is 'more right'. Here I believe the fpop estimates are relatively good, compared to the proportion of erratic behaviour in other areas.
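[One possible reading of the inversion idea in the post above, as a sketch. The `effective_scale` helper is hypothetical; nothing like it exists in the BOINC server, and the interpretation of "inverted" is mine.]

```python
# Hypothetical sketch of the scale-inversion idea; the helper and the
# threshold are assumptions, not BOINC server code. In CreditNew terms,
# pfc_scale < 1 means hosts complete the work for fewer "benchmark FLOPs"
# than estimated, which (the argument goes) can only come from
# parallelism: SIMD, or mis-summed multithreading.
def effective_scale(pfc_scale: float) -> float:
    if pfc_scale < 1.0:
        # Invert rather than discount: the original fpop estimate then
        # acts as a minimum possible claim, and credit converges toward
        # the cobblestone scale instead of collapsing with the scale.
        return 1.0 / pfc_scale
    return pfc_scale

# e.g. an 8-wide SIMD speedup observed as pfc_scale = 0.125 would value
# the work at 8x the naive scaled claim instead of 1/8th of it.
for s in (0.125, 0.5, 1.0, 1.4):
    print(f"pfc_scale={s:5.3f} -> effective scale={effective_scale(s):5.2f}")
```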