Message boards :
Number crunching :
New Credit Adjustment?
JDWhale Send message Joined: 6 Apr 99 Posts: 921 Credit: 21,935,817 RAC: 3 |
Only ran one WU on the comp that was consistently lower. This time my comp had the higher return as compared to my wingman. Think those CPU benchmarks have something to do with it. If you're like me and never shut down, or load BOINC as soon as you boot while other system processes are loading, it can throw off the speed SETI sees your comp crunching at. I guess it can fluctuate. The "credit claim" depends on which version of the "scheduler" Berkeley is running at the time the WUs are returned. They have switched back and forth between schedulers a few times in the past couple of days. |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Funny because each comp is granted credit in a certain amount depending on the CPU benchmarks, or how long the task should last. A certain number of credits for each CPU hour. If your CPU is reporting that it is slower than it actually is, it seems that it would return a lower result. Don't know if it is correct, but the experiment worked on three comps. Maybe someone else should try it. Luckily none went to pending. Isn't that where the task duration correction factor comes in? These WUs were run within minutes of each other... the before and after |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Funny because each comp is granted credit in a certain amount depending on the CPU benchmarks, or how long the task should last. A certain number of credits for each CPU hour. If your CPU is reporting that it is slower than it actually is, it seems that it would return a lower result. Credit has nothing to do with benchmarks, unless you are still running a 4.x version of BOINC. The science application returns an estimate of the actual number of Floating Point Operations done by your computer, and credit is based on that. This should be quite consistent between different machines. ... and the reason for the variance has been explained, by the project scientist, on this thread. |
JDWhale Send message Joined: 6 Apr 99 Posts: 921 Credit: 21,935,817 RAC: 3 |
Isn't that where the task duration correction factor comes in? The duration correction factor (DCF) is used to calculate the estimated CPU time to run a WU and to calculate the amount of work to request from project servers. It has no effect on credit claims or granting. The difference you noticed between the credit claims of your host and your wingman is due to Berkeley changing scheduler programs on the project servers. Last night (Berkeley time) they ran a different version/different parameters... then this morning they switched back to the old scheduler/parameters. WUs returned while running the "new" scheduler have lower credit claims. The new scheduler is having issues sending work to some Apple Mac computers; some other "side effects" have also been reported. |
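A minimal sketch of how a duration correction factor feeds runtime estimates and work fetch, as described above (all numbers and names here are illustrative assumptions, not BOINC's actual variables):

    #include <stdio.h>

    int main(void) {
        double server_estimate_hours = 3.0; /* project's a-priori estimate for one WU (made up) */
        double dcf                   = 0.3; /* this host's duration correction factor (made up) */
        double cache_days            = 1.0; /* assumed "connect about every" cache setting      */

        /* Estimated runtime on this host: the raw estimate scaled by the DCF. */
        double est_hours = server_estimate_hours * dcf;

        /* Rough number of WUs to request to keep a single core busy for cache_days. */
        double wus_wanted = (cache_days * 24.0) / est_hours;

        printf("estimated runtime: %.1f h, WUs to request: %.0f\n", est_hours, wus_wanted);
        return 0; /* the DCF affects scheduling only, never the credit claim */
    }

The point of the sketch is only that the DCF scales time estimates for scheduling; it never enters the credit formula.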
RottenMutt Send message Joined: 15 Mar 01 Posts: 1011 Credit: 230,314,058 RAC: 0 |
maybe now they are giving more credit for a fast internet connection;P :joke: |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Funny because each comp is granted credit in a certain amount depending on the CPU benchmarks, or how long the task should last. A certain number of credits for each CPU hour. If your CPU is reporting that it is slower than it actually is, it seems that it would return a lower result. Ok, so it was a fluke that updating the CPU benchmarks on the comps happened just before the credit started to seem normal instead of ten points off. Can you tell me what the estimate of the FLOPS is based on? Measured speed over what, to get estimated FLOPS based on what? Yes, they should be consistent depending on what the estimate is based on... could you clarify that? Sorry, but I can only see that estimate being valid if the measured CPU speed is taken into account. If the measured CPU speed is off, it would seem to invalidate it. Maybe I am not getting something, looking at it from a comp perspective. I can really see nothing else to base it on other than the speed of the unit and the estimated time it should take to crunch the unit based on said speed, efficiency, etc. Since each comp will vary in the above factors, I understand you to be saying that the measured speed of each unit has nothing to do with it... that would be odd. But please do explain. If the credit for any particular workunit were predetermined, there would not be so much variation in results returned for WUs sent out virtually at the same time... Sorry, but I am getting confused on this one. I think I am to understand that the measured speed of the CPU you are using has nothing to do with it, even if it is off. |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
...Can you tell me what the estimate of the FLOPS is based on? Measured speed over what, to get estimated FLOPS based on what?... Sure: by my understanding (loosely), it is based on the number of floating point operations (with a basis in algorithmic complexity and numerical methods) that the operations performed are supposed to take. For the kind of operations performed here it isn't really an estimate but a [fpop] count that is incremented as each section of processing is performed. For example, an addition would be one flop [correction: fpop], a multiplication also one flop [correction: fpop], a complex number addition I think 3, 4 or 6 (can't remember, probably 4), and a fast Fourier transform, depending on chosen type & size, n log n. That sort of thing. BTW, in case you're wondering, current optimised apps still do all those flops [correction: fpops]; they just tend to pick methods that leverage instruction level parallelism and cache more effectively, so they get higher flops [fpops per second]. Jason. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
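A minimal sketch of the counting idea described above; the per-operation costs and the FFT charge below are illustrative guesses, not the values used in the actual SETI@home source:

    #include <stdio.h>
    #include <math.h>

    static double fpops = 0.0;        /* running count of floating point operations */

    /* Assumed costs, for illustration only. */
    #define FPOPS_ADD         1.0
    #define FPOPS_MUL         1.0
    #define FPOPS_COMPLEX_ADD 4.0     /* a guess; could be 3, 4 or 6 depending on how you count */

    /* Charge an FFT of size n roughly n*log2(n) fpops (real constants vary with FFT type). */
    static void charge_fft(int n) { fpops += n * (log((double)n) / log(2.0)); }

    int main(void) {
        /* Pretend the app just did a million multiply-adds and one 1024-point FFT. */
        fpops += 1000000 * (FPOPS_MUL + FPOPS_ADD);
        charge_fft(1024);
        printf("counted fpops: %.0f\n", fpops);
        return 0;
    }

However the host gets through that work, the count comes out the same; only the elapsed time differs.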
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13841 Credit: 208,696,464 RAC: 304 |
Sorry, but I can only see that estimate being valid if the measured CPU speed is taken into account. If the measured CPU speed is off, it would seem to invalidate it. Maybe I am not getting something, looking at it from a comp perspective. The speed of the CPU makes no difference; the number of operations is still the same, they just take more or less time to perform depending on the CPU doing the work. Grant Darwin NT |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Sorry, but I can only see that estimate being valid if the measured CPU speed is taken into account. If the measured CPU speed is off, it would seem to invalidate it. Maybe I am not getting something, looking at it from a comp perspective. Been trying to look into it to get a better understanding, but everything I read on how FLOPS are calculated revolves around CPU speed and efficiency, which reverts back to the task duration correction factor. You are right, but if your CPU is in the database at 50 million ops/sec slower than it actually is, your FLOPS are being calculated at that rate... correct? A CPU at 1000 ops/sec compared to 1050 would take longer to crunch the same work. So if you are running the faster unit but you are being read as the slower unit, it MAY report that there is actually less work being done until the speed is corrected, correct? Who knows how the CPU speed in the database is reached or when it was last read... maybe when McAfee was running? CPU benchmarks don't change in the database until they are updated, either by the system at start of the program or manually by clicking update. So if you are running the 50 ops/sec faster unit but it is being calculated at the slower unit's speed, it would look like a slower unit parsing the work in that time frame, resulting in a lower granted credit. Sorry, but that's what I am getting from the way that FLOPS are calculated. A 1000 ops/sec machine only does so many FLOPS in a certain time period. Don't take me too seriously, just trying to get clarification on exactly how FLOPS are calculated; if it is based on CPU speed and the speed is listed wrong, there are no two ways to look at it except that this would affect it. |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
...Can you tell me what the estimate of the FLOPS is based on? Measured speed over what, to get estimated FLOPS based on what?... cool |
kittyman Send message Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004 |
I would still stump for a new stat.......let's call it temporary RAC or instant RAC....... Would not replace the current stat, but would show a bit more current state of affairs......maybe a one week rolling average or even a 24 hour rolling average as opposed to 4 weeks...... Would be pretty nifty for us RAC obsessed folks. "Time is simply the mechanism that keeps everything from happening all at once." |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
cool Small correction to what I wrote: in some places I should say fpops (floating point operations). So the apps count fpops; different apps/CPUs do the same fpops in a shorter period, which results in higher flops (e.g. my P4 hyperthreaded gets about 2.8 GFlops, while my Core2Duo gets about 7.4 GFlops). pop..pop.. poppity pop (now got that popcorn tune stuck in my head :S) "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
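A tiny sketch of that distinction: the fpop count is fixed by the work, while the flops rate depends on how long the host took. The numbers below are made up for illustration:

    #include <stdio.h>

    int main(void) {
        double fpops       = 2.8e13;   /* same count for every host crunching this WU (made up) */
        double p4_seconds  = 10000.0;  /* hypothetical elapsed time on the P4                    */
        double c2d_seconds = 3800.0;   /* hypothetical elapsed time on the Core2Duo              */

        printf("P4:       %.1f GFLOPS\n", fpops / p4_seconds  / 1e9);
        printf("Core2Duo: %.1f GFLOPS\n", fpops / c2d_seconds / 1e9);
        return 0;                      /* same fpops, different flops */
    }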
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
cool Small correction to what I wrote: in some places I should say fpops (floating point operations). lol |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Let us take the following hypothetical bit of code. for (i = 0; i < 1000000; i++) { r = 2.5 * i; fpops++; } The value of fpops increases by one for each floating point operation, at the expense of 1 million (presumably integer) adds for "accounting". Instead: for (i = 0; i < 1000000; i++) r = 2.5 * i; fpops = fpops + 1000000; gives the exact same result. If, however, the loop is more complex, and you don't want to simply "add one" for every floating point op (making the science application quite a bit slower), you estimate, on average, how many actual floating point operations are done in each pass, multiply by the number of passes through the loop, and add that to fpops. At the end of the calculations, the science application has an accumulated number of operations based on this estimate. It is then multiplied by 2.85 to bring it into line with the older science application. Unless you are running a really old version of BOINC that doesn't return flop counts, this is how credits are calculated. There is no way to know how many floating point operations are needed before the work unit is crunched. Noisy work units finish very quickly -- which means very few floating point operations. The BOINC benchmark is used to get a rough estimate of speed. This, and the Duration Correction Factor (which is the ratio of the benchmark "guess" to measured reality), are only used for work fetch: if SETI estimates that a work unit will take 3 hours, and your DCF is somewhere around 0.3, BOINC can fetch three times the estimated work with no risk. |
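To make the end of that concrete, here is a rough sketch of turning an accumulated fpop count into a credit claim. The 2.85 factor is the one mentioned in the post above; the Cobblestone scale of roughly 200 credits per day of a 1 GFLOPS machine is an assumption of this sketch, and the fpop count itself is made up:

    #include <stdio.h>

    int main(void) {
        double counted_fpops = 1.1e13;                /* accumulated by the science app (made up) */
        double scaled_fpops  = counted_fpops * 2.85;  /* align with the older science application */

        /* Assumed Cobblestone scale: ~200 credits for one day of a 1 GFLOPS machine. */
        double credit_claim = scaled_fpops / 1e9 / 86400.0 * 200.0;

        printf("claimed credit: %.2f\n", credit_claim);
        return 0;
    }

With these made-up numbers the claim works out to roughly 73 credits.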
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Thanks for that in-depth explanation. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Been trying to look into it to get a better understanding, but everything I read on how FLOPS are calculated revolves around CPU speed and efficiency, which reverts back to the task duration correction factor. CPU speed is a factor. Let us assume that you have a Core2 Solo (one core) at 1.6 GHz, and a Core2 Solo (one core) at 2.0 GHz. We can definitely say that the second processor can do 25% more work, all other things being equal (they rarely are). When we compare a Core2 Solo against a Pentium 4 at the same clock rate, things are very different. The Core2 processor simply gets more work done per clock. Comparing AMD and Intel makes it even more interesting. Then we start talking about how well the rest of the computer can deliver work to the CPU -- it doesn't matter how fast the CPU can process if it has to wait for data. This leads to discussions of bus speed and cache sizes. In other words, going solely by clock speed hasn't been accurate since we moved from the 8088 to the 80286. This supports Jason's comments on optimization: sometimes you rearrange things to deliver the data faster, and sometimes you use instructions that are in a Core2 but not in a Pentium III, but the same amount of work gets done. |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Oh no, my comments never were that clock speed was the single factor. Only that if clock speed is interpreted erroneously, it might throw off the number of credits granted for any particular WU. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Oh no, my comments never were that clock speed was the single factor. Only that if clock speed is interpreted erroneously, it might throw off the number of credits granted for any particular WU. Clock speed is not part of the credit calculation. |
DaBrat and DaBear Send message Joined: 13 Dec 00 Posts: 69 Credit: 191,564 RAC: 0 |
Oh no, my comments never were that clock speed was the single factor. Only that if clock speed is interpreted erroneously, it might throw off the number of credits granted for any particular WU. Thanks |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Eric added the adjustment mechanism to BOINC with changeset [trac]changeset:15661[/trac]. In a post to the boinc_dev mailing list thread on cross-project credits he said that after 2.8 days it had adjusted the S@H credit multiplier from 1.0 down to 0.978. It keeps a 30-day history of the multiplier. Credit claims are calculated based on when the work was "sent" rather than when it is returned, so in most cases the two hosts with initial replication tasks will be claiming using the same multiplier. The method does involve benchmarks, but uses statistics over the last 10,000 results, so individual benchmarks will have negligible effect. The goal is that overall project granted credits should be equivalent to Cobblestones. Eric thinks the multiplier would get down to around 0.85 after 30 days with current apps, though he of course hopes to release the 6.0x app sooner than that. 6.02 is not much different in speed from 5.27, so the trend will probably not change much. Joe |
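A heavily simplified sketch of the adjustment Joe describes; the averaging method, the struct fields, and the sample numbers below are assumptions for illustration, not the code in the changeset he cites:

    #include <stdio.h>

    /* Hypothetical per-result record: the credit claimed from fpop counting
       versus what benchmarks * CPU time would have claimed for the same result. */
    struct result {
        double fpop_claim;
        double benchmark_claim;
    };

    /* A project-wide multiplier chosen so that, averaged over recent results,
       granted credit lines up with the benchmark-based Cobblestone rate. */
    static double credit_multiplier(const struct result *r, int n) {
        double fpop_sum = 0.0, bench_sum = 0.0;
        for (int i = 0; i < n; i++) {
            fpop_sum  += r[i].fpop_claim;
            bench_sum += r[i].benchmark_claim;
        }
        return bench_sum / fpop_sum;   /* drops below 1.0 when fpop claims run high */
    }

    int main(void) {
        struct result sample[3] = { {80.0, 76.0}, {60.0, 59.0}, {100.0, 98.0} };  /* made up */
        printf("multiplier: %.3f\n", credit_multiplier(sample, 3));
        return 0;
    }

In the real mechanism the window is the last 10,000 results, and the multiplier recorded at send time is the one applied to the claim, as Joe notes above.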