Message boards :
Number crunching :
Excessive "Time to completion" estimate!
Dorsai Send message Joined: 7 Sep 04 Posts: 474 Credit: 4,504,838 RAC: 0 |
Can anyone offer a suggestion as to why this has occurred? I realise it won't actually take 1200+ hours to finish, but it's as if the "duration correction factor" has been applied as a "multiplier" when it should have been a "divider"? Foamy is "Lord and Master". (Oh, + some Classic WUs too.) |
Dr. C.E.T.I. Send message Joined: 29 Feb 00 Posts: 16019 Credit: 794,685 RAC: 0 |
Can anyone offer a suggestion as to why this has occurred? eh Dorsai - You might 'un-hide' your computers so as to allow one to look @ the Issue [just a suggestion] and, Welcome back to the Boards btw . . . BOINC Wiki . . . Science Status Page . . . |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Can anyone offer a suggestion as to why this has occurred? Recently, the most common cause of the "duration correction factor" (DCF) being unreasonably high has been running VLAR WUs with the CUDA version of S@H Enhanced. But it can happen any time a WU runs much longer than it should; BOINC then thinks all subsequent work will also be very slow. Astropulse work occasionally has trouble restarting because the checkpoint file isn't totally reliable, so the work is restarted from the beginning, which increases total runtime; if that happened a few times on one WU it could also drive DCF to the levels you're seeing. DCF was designed to make sure a host doesn't get more work than it can do within deadline, and was made very pessimistic. Its use in the estimate shown by BOINC Manager is basically just because they wanted to transition smoothly from that initial estimate to showing estimates based on progress. The transition is an even blend, so at 26.411% the estimate is still almost 75% based on DCF. Joe |
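[The even blend Joe describes can be sketched as follows. This is a hypothetical simplification for illustration, not BOINC's actual source; `remaining_estimate` and the variable names are invented here.]

```python
# Sketch of an even linear blend between the two estimates (illustrative,
# not BOINC's exact code): as fraction_done rises, weight shifts from the
# DCF-based guess toward the progress-based one.

def remaining_estimate(fraction_done, dcf_estimate, progress_estimate):
    """Weight on the DCF-based estimate falls linearly as the task runs."""
    return ((1 - fraction_done) * dcf_estimate
            + fraction_done * progress_estimate)

# At 26.411% done, the DCF-based figure still carries ~73.6% of the
# weight, i.e. "almost 75% based on DCF".
weight_on_dcf = 1 - 0.26411
```

So even a task that is a quarter done still shows an estimate dominated by the (possibly inflated) DCF.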
Aurora Borealis Send message Joined: 14 Jan 01 Posts: 3075 Credit: 5,631,463 RAC: 0 |
Can anyone offer a suggestion as to why this has occurred? Normal DCF for Seti is under 1 for most modern systems. You could go into the client_state.xml file and edit the DCF to a more reasonable value. Just make sure BOINC and the applications have all stopped first. |
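[For reference, the value lives inside the relevant `<project>` section of client_state.xml, in a tag along these lines (surrounding content abbreviated; exact layout varies by client version). Edit it only while BOINC is fully shut down, as Aurora Borealis says:]

```xml
<project>
    ...
    <duration_correction_factor>1.000000</duration_correction_factor>
    ...
</project>
```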
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Can anyone offer a suggestion as to why this has occurred? Duration Correction Factor is, and should be, a multiplier. I'm just going to add one observation: it is far better for BOINC to overestimate run time (and not fetch new work) than it is to underestimate and miss deadlines. For that reason, DCF tends to increase very quickly and decrease slowly. You can track it down in your client_state.xml file and fix it, or you can just let it correct itself. |
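[The asymmetry described above can be sketched like this. This is an illustrative model, not BOINC's actual update code; `update_dcf` and the 10% step are assumptions made here to show the "up fast, down slow" shape.]

```python
# Illustrative sketch of an asymmetric DCF update (not BOINC's source):
# when a task runs LONGER than estimated, DCF jumps straight up; when it
# runs shorter, DCF only creeps down by a fraction of the gap.

def update_dcf(dcf, estimated_hours, actual_hours, step=0.1):
    ratio = actual_hours / estimated_hours
    if ratio > dcf:
        return ratio                       # increase immediately (pessimistic)
    return dcf + step * (ratio - dcf)      # decrease slowly, 10% of the way

dcf = 1.0
dcf = update_dcf(dcf, 10, 40)   # one very slow task: dcf leaps to 4.0
dcf = update_dcf(dcf, 10, 10)   # a normal task barely dents it: ~3.7
```

One bad task can thus inflate the estimates for weeks of subsequent normal tasks.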
MarkJ Send message Joined: 17 Feb 08 Posts: 1139 Credit: 80,854,192 RAC: 5 |
Part of the problem is that BOINC only has one DCF per project; it doesn't maintain them by application. There is an enhancement request in to address this, but if/when it will happen is anybody's guess. In the meantime you will see the DCF jump around depending on the last task it finished. If you are running all the Seti work it can vary between AP, AP_v5, MB (cpu) and MB (cuda), which is why the estimates are all over the place. BOINC blog |
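[MarkJ's point can be sketched with made-up numbers. This deliberately oversimplifies (the shared DCF just snaps to the last task's actual/estimated ratio) to show why one factor shared across applications makes every app's estimate swing:]

```python
# Hypothetical speed ratios (actual/estimated) per application on one host:
true_ratio = {"AP": 3.0, "MB_cpu": 1.0, "MB_cuda": 0.2}

dcf = 1.0
for finished_app in ["MB_cuda", "AP", "MB_cuda"]:
    dcf = true_ratio[finished_app]   # simplified: DCF snaps to the last ratio
    # Every app's NEXT estimate now uses the same shared DCF, so a CUDA
    # finish drags the AP estimate down (and an AP finish inflates CUDA's):
    estimates = {app: 10 * dcf for app in true_ratio}
```

After the last (fast) CUDA task, even a 30-hour AP task would be estimated at 2 hours; after an AP task, a CUDA task would look 15x too long.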
Dorsai Send message Joined: 7 Sep 04 Posts: 474 Credit: 4,504,838 RAC: 0 |
Duration Correction Factor is, and should be, a multiplier. That is of itself not a problem. What I meant was that if the DCF was 0.5 (i.e. it will take half as long as estimated) then 20 x 0.5 is 10, but it looks like it is doing 20 / 0.5, which is 40. What seems to be happening is that every time I finish an Astropulse WU faster than expected, the next one I get is expected to take even longer. I.e., do a 100 hour one in 50 hours (half the time), and the next one is expected to take 200 hours (twice as long, rather than half as long!). So when I then do this 200 hour one in 50, the next one is expected to take 4 times as long, and so on. Host in question is this: http://setiathome.berkeley.edu/show_host_detail.php?hostid=4026476 It's not causing any problems, as they take so long that I would not want a queue of work, and as soon as one is almost finished I get another, but it seems that the time estimates are going up geometrically, rather than slowly coming down. Foamy is "Lord and Master". (Oh, + some Classic WUs too.) |
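[Dorsai's arithmetic as code, with his hypothetical values: multiplying by a DCF of 0.5 shrinks the estimate as intended, while accidentally dividing doubles it, and repeating the mistake after each fast finish compounds geometrically.]

```python
dcf = 0.5             # "it will take half as long as estimated"
base_estimate = 20.0

correct = base_estimate * dcf   # 20 x 0.5 = 10: estimate shrinks
buggy   = base_estimate / dcf   # 20 / 0.5 = 40: estimate balloons

# Repeating the divide-by-DCF mistake after each fast finish:
estimate = 100.0
for _ in range(3):
    estimate = estimate / dcf   # 200 -> 400 -> 800: geometric growth
```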
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
... BOINC 6.4.7 does estimates based on elapsed (wall) time, but there were further changes related to that in later versions. The elapsed time accounting is probably less stable than CPU time, since it's more affected by other host activity. In any case, 6.4.7 shows CPU time used and estimated time in wall time, which may be contributing some part of the apparent mismatch. Reported time for a task is still CPU time, too. I note the host's results for both AP and MB work show fairly frequent restarts, and nearly half are at the same progress point, indicating a new checkpoint hadn't been reached. If you don't actually need all 2 GB of memory for other activities, setting the "Leave applications in memory while suspended?" preference to Yes would improve efficiency and maybe help the estimates problem. Joe |
Dorsai Send message Joined: 7 Sep 04 Posts: 474 Credit: 4,504,838 RAC: 0 |
... Will try that. Ty. :) (you never know what you are doing is screwing it, until you're told "Stop screwing it before I screw you!") Foamy is "Lord and Master". (Oh, + some Classic WUs too.) |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.