Message boards : Number crunching : Calibrating Client v5.3.12.tx37 - fair credits in all projects
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
Markus, if you read the description on the download page, the readme file, or a couple of posts here, it clearly states you need to give it some time. I am speaking about a dozen or two WUs. Sometimes it may take even longer. I think I am going to put that sentence in red bold :) I did not compile the Linux binary, and AFAIK it was compiled with generic settings, so no optimizing at all - you should preferably recompile it yourself, optimized for your specific box. Then you'll have the basic Claimed Credit higher even if the calibration has not done its work yet. trux BOINC software Freediving Team Czech Republic |
MSc Send message Joined: 14 Jun 99 Posts: 2 Credit: 1,376,100 RAC: 0 |
Hi trux, sorry for being impatient - the grouchy kid with a new toy! I thought a few hours / units would be enough; now my knowledge has improved, and things are going in the expected direction. As already mentioned earlier in this thread (hopefully it will not turn into a threat), a regular restart of the core client does not perfectly support the idea in this build. The change is already on the wish list, so job done. I noticed another bit, which wasn't covered before (or I missed / misunderstood it). Based on the time correction, a host may now work at more than 100% for a project: CC calibration: 6.90 >> 28.18 (time: 3605s >> 5436s / Gfpops: 1.11 >> 6.76) Will this be accepted by the projects? About compiling my own optimized Linux clients, I'll give it a try, but I'm a "[ctrl]-c, [ctrl]-v, [enter], pray, look like Scooby-Doo" developer. ;-) An area to learn and improve. Bye Markus |
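Markus's log line can be sanity-checked against the standard BOINC claimed-credit formula. The sketch below is illustrative only, not trux's actual code - the plain formula does not reproduce the 6.90/28.18 figures exactly, so the tx-series client evidently applies further corrections - but it shows why a calibrated GFLOPS value well above the benchmark lets a host claim more than the benchmark-based estimate even on a longer-running task:

```python
# Illustrative sketch only, NOT the calibrating client's actual code.
# The standard BOINC claimed-credit formula is
#   credit = cpu_seconds * gflops / 86400 * cobblestone_factor
# The calibrating client replaces the benchmark GFLOPS with a value
# derived from real work done, so the corrected claim can rise even
# though the task also took longer.

SECONDS_PER_DAY = 86400
COBBLESTONE_FACTOR = 100

def claimed_credit(cpu_seconds, gflops):
    """Claimed credit for one result under the standard formula."""
    return cpu_seconds * gflops / SECONDS_PER_DAY * COBBLESTONE_FACTOR

# Numbers from the log line above (3605 s at 1.11 Gflops before
# calibration, 5436 s at 6.76 Gflops after):
before = claimed_credit(3605, 1.11)
after = claimed_credit(5436, 6.76)
print(f"benchmark-based: {before:.2f}, calibration-based: {after:.2f}")
```

The printed values differ from the 6.90 >> 28.18 in the log, so the client presumably applies additional scaling; the sketch only captures the direction of the effect.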
krgm Send message Joined: 2 Jun 05 Posts: 30 Credit: 72,152 RAC: 0 |
Thanks for all your work so far. This client has averaged out all my computers nicely (one was way too high, another too low). I have noticed a bug with the process priority, however. I run seti 90%, einstein 10%, with process priority to seti and a .5 cache. As long as there is at least one einstein work unit, the computer runs 90% seti, 10% einstein, and it brings in enough seti to keep about a 1/2 day cache. So far so good. Then it eventually runs out of work for einstein. About a day after running out of einstein, it stops bringing in new work for seti. When the work for seti runs out, it asks einstein for 1 second of work, runs that for 2-3 hours, then finally asks seti for 1/2 day of work. |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
Thanks for the comments, but to save time for all of us, I recommend that you also read the buglist on my website. It shows a bug in tx36, fixed in tx41, ranked as SERIOUS: "decreasing cache at priority projects due backup ltd". If you experience problems with it, simply turn off the "priority_projects" option for the time being, until the new version is released. If you read the buglist before submitting a bug report, it leaves me more time for working on the client :) trux BOINC software Freediving Team Czech Republic |
Sir Ulli Send message Joined: 21 Oct 99 Posts: 2246 Credit: 6,136,250 RAC: 0 |
btw, it is not fair that a P4 with HT gives all WUs a credit of 32... The P4s are fast, very fast, and when I look at my P4 http://setiathome.berkeley.edu/results.php?hostid=56765&offset=120 it is the last in claimed credit, but will become the best in granted credit, so the slower computers are left in the dark, I think. Just for info. Greetings from Germany NRW Ulli |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
btw it is not fair, that an P4 with HT gives all WUs a credit of 32...
It looks like you misunderstood both the purpose of credits and the purpose of the calibration. That's not bad, though - it is a common misinterpretation. So let me explain it: 1) The client does not claim 32 for all WUs. We wouldn't need any measurement if that were so, and could quietly go back to the S@H Classic system, where each WU had the same value regardless of its real length. 2) If you take a WU, regardless of which computer you use to compute it and regardless of how fast you do it, theoretically all computers should claim (and be granted) the same credit for this WU. Although the credit may vary between different units, theoretically it should not vary between hosts. Unfortunately, due to the very limited accuracy of the benchmark measurements, credits based only on them are practically never at the theoretical level. That is the case only on older computers with inefficient CPUs and the official S@H application, and even that varies very much with time and conditions. You may want to read more about credits in the WiKi. trux BOINC software Freediving Team Czech Republic |
Sir Ulli Send message Joined: 21 Oct 99 Posts: 2246 Credit: 6,136,250 RAC: 0 |
Is this right? A P4 3.2 http://setiathome.berkeley.edu/results.php?hostid=56765&offset=120 (my P4, for info) gets only 9 claimed credit, and with your client 32 granted credit. The P4 gets the lowest claimed credit, but with your BOINC client all hosts get 32 granted credit - is this right? We also have to look at the people who are not running the newest hardware. For info, a P4 3.2 like mine takes about 45 minutes for 2 WUs, so roughly 64 x 32 = 2048 credits a day. Greetings from Germany NRW Ulli |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
Yes, that's right. Did you look at the WiKi link I sent? It is explained there in more detail - I really recommend checking it out. Credit per WU is (at least in theory, or with the calibration) entirely hardware independent. trux BOINC software Freediving Team Czech Republic |
Skip Davis Send message Joined: 22 Dec 00 Posts: 44 Credit: 2,565,939 RAC: 0 |
If he thinks thats unfair, how about my x2 Athlon 64!??? Hehehehe! |
W-K 666 Send message Joined: 18 May 99 Posts: 19080 Credit: 40,757,560 RAC: 67 |
Question: When using the calibrating client, is it trying to adjust the claimed credits to 32.29 for the reference unit or for the average unit? |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
When using the calibrating client, is it trying to adjust the claimed credits to 32.29 for the reference unit or for the average unit?
It is a little more complicated than that, but principally the most determining are units with long processing times - meaning those with processing times practically equal to the reference WU. You can, of course, also launch the reference WU to make sure the calibration has parsed at least one full-length WU, but in real life it is really unnecessary. Besides, the reference value is not 32.29 but 32.32 (see the formula) - though that's just a very minor difference, and most WUs will probably claim lower credit anyway. trux BOINC software Freediving Team Czech Republic |
W-K 666 Send message Joined: 18 May 99 Posts: 19080 Credit: 40,757,560 RAC: 67 |
When using the calibrating client, is it trying to adjust the claimed credits to 32.29 for the reference unit or for the average unit?
It is a little more complicated than that, but principally the most determining are units with long processing times - meaning those with processing times practically equal to the reference WU. You can, of course, also launch the reference WU to make sure the calibration has parsed at least one full-length WU, but in real life it is really unnecessary. Besides, the reference value is not 32.29 but 32.32 (see the formula) - though that's just a very minor difference, and most WUs will probably claim lower credit anyway.
I was asking because I have had the week off (left over from last year, and they insisted), so I have cleaned and maintained all seven computers in the house and crunched the reference unit on all of them. Compared to the average times on each of the ones that crunch seti, the reference unit takes between 25% and 30% longer. It therefore struck me that the claim by a lot of people that they should be getting 32.29 or .32 seems a little over the top. Before the optimised apps appeared, I think the average granted was just over 25. |
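W-K's arithmetic can be checked directly: if the reference unit claims 32.32 credits and runs 25-30% longer than an average unit, the implied average per-unit claim lands right around the "just over 25" he remembers. A quick back-of-the-envelope sketch (numbers taken from the post, nothing official):

```python
# Back-of-the-envelope check of the post above: a reference unit worth
# 32.32 credits that takes 25-30% longer than an average unit implies
# an average per-unit claim of roughly 24.9-25.9 credits.
REFERENCE_CREDIT = 32.32

for overhead in (0.25, 0.30):
    avg = REFERENCE_CREDIT / (1 + overhead)
    print(f"reference {overhead:.0%} longer -> average claim ~{avg:.1f}")
```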
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
As I wrote, the average times are irrelevant - you are right that they are definitely lower. What decides most are the max times. trux BOINC software Freediving Team Czech Republic |
skab Send message Joined: 13 Mar 03 Posts: 18 Credit: 2,874,929 RAC: 0 |
trux, on your download page I see a Linux option - is this a calibrating client to run under Linux? SETI, ONLY SETI, ALWAYS SETI!! |
Ingleside Send message Joined: 4 Feb 03 Posts: 1546 Credit: 15,832,022 RAC: 13 |
I think you misunderstood the purpose of the calibration and the way it works. When I speak about the value of 32.32 - that's the value of a unit that matches the estimated flops value (it means the unit the project developers based their estimation on). In reality, the value may differ at each unit, of course. The calibration does not mean that for every unit the credit 32.32 based on the estimated value is being claimed. That would not require any effort and would be exactly the same system as was used in S@H Classic. The algorithm is little bit more complicated.
I've been otherwise tied up, hence the late response. The purpose of calibrating would be that if, for example, one computer runs 100 tasks each taking 10 hours to crunch, all of these should claim N Cobblestones. If the same computer suddenly crunches a task taking only 1 hour, it will claim N/10 Cobblestones. Another computer crunching the exact same task should also claim the same "work done". Your calibrating client tries to do both these things, but is making a mistake by claiming the long-running Seti tasks are worth 32.32 Cobblestones. Looking back, during BOINC beta-testing SETI@Home's rsc_fpops_est was 3.9e12 and cobblestone_factor = 300, meaning a task was worth 13.54 Cobblestones. After the debug code was turned off, there was a huge jump in benchmarks, and this resulted in cobblestone_factor = 100 instead; SETI@Home also set rsc_fpops_est = 2.79248e13, meaning a task was worth 32.32 Cobblestones. Since the change in rsc_fpops_est is not the same as the change in cobblestone_factor, and AFAIK not the change in the benchmark either, my guess is Berkeley did some quick tests on the most common platform, Windows, checked cpu-time against benchmarks, and changed rsc_fpops_est so the estimated crunch-times were fairly close to the real cpu-time. But it's a known fact that the v3.18-v4.19 Windows BOINC clients had artificially high benchmark scores, since part of the benchmark was optimized away by the compiler and in reality never ran at all.
While the benchmarks were later changed to actually run all the tests, which resulted in lower scores for Windows computers, rsc_fpops_est is still the same. A lower benchmark means lower claimed credits, but also higher estimated crunch-times than reality. The last "official" client still giving artificially high benchmark scores is v4.19, so as more and more users move to later clients, the claimed credits also drop. Looking at the average granted credit, it indicates 20-25 Cobblestones/task would be a more accurate calibration... |
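Ingleside's two credit figures check out if one assumes the usual BOINC cobblestone definition: a task's credit is its estimated FLOPs expressed as days of work on a 1 GFLOPS host, scaled by the cobblestone factor. The function below is a sketch of that definition, not project code:

```python
def task_credit(rsc_fpops_est, cobblestone_factor):
    """Credit implied by a task's estimated FLOPs, assuming the usual
    BOINC cobblestone definition (days of work on a 1 GFLOPS host,
    scaled by the cobblestone factor)."""
    gflop_seconds = rsc_fpops_est / 1e9
    return gflop_seconds / 86400 * cobblestone_factor

# Beta-test era: rsc_fpops_est = 3.9e12 with cobblestone_factor = 300
print(round(task_credit(3.9e12, 300), 2))       # 13.54
# Later: rsc_fpops_est = 2.79248e13 with cobblestone_factor = 100
print(round(task_credit(2.79248e13, 100), 2))   # 32.32
```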
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
trux, on your download page I see a Linux option, is this for an calibrated client to run under linux?
Sure, what else would you expect? I wouldn't call it a Linux build if it were for another OS :) trux BOINC software Freediving Team Czech Republic |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
...Looking on the average granted credit, it indicates 20-25 Cobblestones/task would be a more accurate calibration...
Yes, it is true that the average will be lower, and that remains true also with the calibrating client. A significant portion of WUs will be either shorter due to a different angle, or quite short due to detected noise, hence the average claimed credit will definitely be lower than the ideal 32.32. However, there are always enough long WUs among the common workunits that require as much processing time as the reference unit, which helps calibrate the correction factors to the right values. I compared the processing times of many units against the reference unit, on many different CPUs, so I am pretty sure of this claim. Still, you are right that the average claimed credit will be lower than the theoretical ideal. trux BOINC software Freediving Team Czech Republic |
Roy Collins Send message Joined: 12 Aug 99 Posts: 73 Credit: 53,671,192 RAC: 71 |
Could I make a small suggestion / request, Trux? On your main boinc web page, you have a link that says "Calibrating BOINC core client for Windows (recommended)". When you follow that link, you actually find downloads for the Windows client AND for the Linux client. I only found the Linux version by actually reading (!!) these message boards - I happened to see a reference there. When looking at your page, I didn't even consider clicking the "Windows client" link to find a Linux client. Dunno WHY. :) Roy BTW - Great work, sir. Thank you. |
StokeyBob Send message Joined: 31 Aug 03 Posts: 848 Credit: 2,218,691 RAC: 0 |
I kept doing the same thing. I was looking all over the place. The Linux calibrating client could use a "truxoft_prefs.xml" file also. It may save some of us non-programmer types some trouble getting it going. Same here. Great work, sir. Thank you. |
trux Send message Joined: 6 Feb 01 Posts: 344 Credit: 1,127,051 RAC: 0 |
When you follow that link, you actually find downloads for the Windows client AND for the Linux client.
Hmm, the Linux tar.gz file is right in the Download table, in the Linux column. I've just looked at it, and it downloads just fine. Or are you telling me the tar.gz archive contains a Windows version instead? That would surprise me, but I cannot exclude such a mistake. EDIT: ok, I see it now - you refer to the menu page (index.htm), not to the client page (core-cal.htm). Yes, sorry, I did not even notice it still referred just to Windows. I'll fix it, of course. (EDIT2 - done)
The Linux calibrating client could use a "truxoft_prefs.xml" file also. It may save some of us non-programmer types some trouble getting it going.
The Linux client uses source code identical to the Windows version. That means it uses the truxoft_prefs.xml file too, of course. Well, unlike the FreeBSD version (and as stated on the download page), I did not compile this one myself, so I cannot exclude that Uftoun-Zmedelec (who compiled it) used an older version of my sources, but again, that would very much surprise me. AFAIK it is build tx36, hence it uses truxoft_prefs.xml, of course. trux BOINC software Freediving Team Czech Republic |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.