Message boards :
Number crunching :
Longer MB tasks are here
samuel7 Send message Joined: 2 Jan 00 Posts: 47 Credit: 2,194,240 RAC: 0 |
The change is in. Downloaded this evening (UTC+3): <chirp_resolution>0.1665. Should mean double run time. Deadlines for anything other than shorties are in September. For those who want to know, a VLAR has <rsc_fpops_est>160720000000000.000000 and a VHAR <rsc_fpops_est>47560000000000.000000 |
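A rough way to see what those header numbers imply, as a sketch: BOINC's first-order duration estimate is roughly rsc_fpops_est divided by the host's effective speed. The 20 GFLOPS host speed below is a made-up example figure, not anyone's actual benchmark.

```python
# Rough runtime estimate from the <rsc_fpops_est> values quoted above.
# First-order BOINC-style estimate: fpops_est / effective host FLOPS.
# The 20 GFLOPS host speed is an assumed example, not a real benchmark.

VLAR_FPOPS = 160_720_000_000_000.0   # from the WU header quoted above
VHAR_FPOPS = 47_560_000_000_000.0

def estimated_hours(fpops_est, host_flops):
    """First-order BOINC-style duration estimate, in hours."""
    return fpops_est / host_flops / 3600.0

host_flops = 20e9  # assumed 20 GFLOPS effective speed
print(f"VLAR: {estimated_hours(VLAR_FPOPS, host_flops):.2f} h")
print(f"VHAR: {estimated_hours(VHAR_FPOPS, host_flops):.2f} h")
```

Doubling rsc_fpops_est doubles this estimate directly, which is why the new tasks show roughly twice the "To Completion" time.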
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
I also just noticed some fresh downloads in my list that are expected to take about twice as long as the typical 0.44 AR tasks that estimate ~2 hours. The new tasks are estimating a little over 4 hours. Not complaining in the least. I believe this is a good way to "slow down" the fast hosts without stepping on any toes. Longer crunch time = less server pounding, and better yet, the longer crunch time comes in the same size package as before (~367 KiB). Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
BTW, will the credits also be doubled? ;-) And what about the ARs? Because of the angle-range detection in the CUDA_VLARkill_app: are the VLARs the same? Changed/other/new ARs? |
.clair. Send message Joined: 4 Nov 04 Posts: 1300 Credit: 55,390,408 RAC: 69 |
Looks like I have just got some of them; completion times jump from 1:08 to 2:17. Well, I cannot get long AP for this Linux Q6600, so these will do. |
JohnDK Send message Joined: 28 May 00 Posts: 1222 Credit: 451,243,443 RAC: 1,127 |
Very good question: double work = double credit, or = half credit? |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
If there are extra optimizations in the code, it may be less than double credit as it is the FLOP count that is counted for credit. BOINC WIKI |
Vistro Send message Joined: 6 Aug 08 Posts: 233 Credit: 316,549 RAC: 0 |
I always thought credits were directly tied to how many calculations your CPU did, so a longer work unit, requiring more calculations, gives you more credit. |
Pappa Send message Joined: 9 Jan 00 Posts: 2562 Credit: 12,301,681 RAC: 0 |
Just so everyone knows, this is true. The Enhanced workunits have arrived. The Enhanced workunits have been tested in SETI Beta, and no outward ill effects were identified. In a conversation with Eric, he confirmed it. He also confirmed that since the new workunits require twice the FLOPs, more credit will be granted. That is still subject to the normalization script running; for that one we wait. Currently, anyone running an Optimized Application can continue to work, and it "should" cause no ill effects (errors). If you see a larger number of workunit errors, report them here in Number Crunching. Regards Please consider a Donation to the Seti Project. |
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
... It would be good if the 'PC task list overview' were available again for this.. ;-) The 'Error' and 'result validation' overview. |
Pappa Send message Joined: 9 Jan 00 Posts: 2562 Credit: 12,301,681 RAC: 0 |
Patience... So during this recovery, one would hope that no one would do anything prematurely. Last I looked, the replica was still a bit behind the master as things are actually uploading and downloading. I also cannot see my tasks. When it is deemed appropriate to turn them back on, they will appear! Regards ... Please consider a Donation to the Seti Project. |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
I always thought credits were directly tied to how many calculations your CPU did. So a longer work unit requires more calculations gives you more credit. Yes. However, there can be two things happening that somewhat pull against each other. Extra depth causes more calculations, and better enhancements cause fewer calculations. BOINC WIKI |
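The FLOP-count-to-credit relation JM7 describes can be sketched in numbers, assuming the standard cobblestone definition (200 credits per day of work on a host sustaining 1 GFLOPS). The FLOP totals below are purely illustrative, and the project also applies server-side normalization on top of this.

```python
# Credit claim from a counted FLOP total, using the cobblestone definition:
# 200 credits per day of work on a host sustaining 1 GFLOPS.
# The 80 TFLOP workunit size is a made-up illustrative figure.

COBBLESTONES_PER_GFLOP_DAY = 200.0
SECONDS_PER_DAY = 86_400.0

def claimed_credit(flops):
    """Credit implied by a raw FLOP count, before any normalization."""
    gflop_days = flops / 1e9 / SECONDS_PER_DAY
    return gflop_days * COBBLESTONES_PER_GFLOP_DAY

old_wu = 80_000_000_000_000.0        # hypothetical pre-change FLOP count
print(claimed_credit(old_wu))
print(claimed_credit(2 * old_wu))    # doubled FLOPs -> doubled claim
```

This is why "double the FLOPs" maps to "double the credit" before optimizations or normalization pull the numbers around.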
zpm Send message Joined: 25 Apr 08 Posts: 284 Credit: 1,659,024 RAC: 0 |
it's one of those, 2 steps forward; 1 step back... |
Pappa Send message Joined: 9 Jan 00 Posts: 2562 Credit: 12,301,681 RAC: 0 |
it's one of those, 2 steps forward; 1 step back... No, actually it is a step forward. It reduces the load without making everyone give up Optimized Apps, and it does more science while staying backwards compatible. It has just been tough getting there. Please consider a Donation to the Seti Project. |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
I always thought credits were directly tied to how many calculations your CPU did. So a longer work unit requires more calculations gives you more credit. There is no code change, simply a header parameter adjustment. The estimates and deadlines have doubled, calculations very nearly doubled. Initialization code doesn't need to double, so particularly for CUDA elapsed times will not quite double. There are also operations which were done only 1 time at zero chirp for some angle ranges which will still be only done once, and at other angle ranges will increase from 1 to 3. Overall, expect average run times about 1.95 times the old value for the same angle range. But since that's less than the doubling of the estimate the server-side credit adjustment will correct downwards slightly, maybe too little to really notice. Joe |
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so). ("To Completion" time is maybe 5-10 min. less over that time). They are < 10% complete, which would argue completion times around 8 hours (??). Is this a bug? Feature? (I'm using the optimized apps). |
Jord Send message Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3 |
No actually it is a step Forward. It reduces the load without making everyone give up Optimized Apps and does more Science that is Backwards compatible. I hope you're correct on that. How about those that use the VLAR killer for their GPUs? If all these are classed as VLAR, they'll be continuously killing them and downloading more work; no letup in the (down)load then. |
samuel7 Send message Joined: 2 Jan 00 Posts: 47 Credit: 2,194,240 RAC: 0 |
No actually it is a step Forward. It reduces the load without making everyone give up Optimized Apps and does more Science that is Backwards compatible. Raistmer can give a definitive answer, but I think his app looks at the angle range in the WU header and is "immune" to this change. Marius' rescheduling tool does the same (it is based on Raistmer's perl script). How the change affects the crunching of a VLAR on a GPU, I don't know. |
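The header check samuel7 describes can be sketched like this. The `<true_angle_range>` tag name and the 0.12 / 1.13 cutoffs are assumptions from memory; check Raistmer's script or Marius' rescheduler for the real tag and thresholds.

```python
import re

# Sketch of classifying a SETI@home MB workunit by the angle range in its
# header, the way Raistmer's perl script / Marius' rescheduler reportedly do.
# The <true_angle_range> tag and the 0.12 / 1.13 cutoffs are assumptions;
# the actual tools are the authority on both.

VLAR_MAX = 0.12
VHAR_MIN = 1.13

def classify(header_text):
    m = re.search(r"<true_angle_range>\s*([\d.]+)", header_text)
    if not m:
        return "unknown"
    ar = float(m.group(1))
    if ar < VLAR_MAX:
        return "VLAR"
    if ar > VHAR_MIN:
        return "VHAR"
    return "midrange"

print(classify("<true_angle_range>0.009"))   # a typical VLAR
print(classify("<true_angle_range>0.44"))    # a typical midrange task
```

Since the change only doubled rsc_fpops_est and left the angle range untouched, a classifier like this would behave exactly as before, which is why the VLAR-killer should be "immune."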
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so). I take it back - I guess these just take a (long) while to start up - they seem to be settling down "normally" to a final CPU time in the area of the original 5 or so hours. It is now 3.5 hours or so into execution, and they all have 1-1.5 hours "To Completion". My bad! And I'm glad. |
Zen Send message Joined: 25 May 99 Posts: 9 Credit: 3,659,629 RAC: 0 |
I don't know if anyone else has experienced a problem with the new work units or not, but I have. I got a short unit with the other longer MB files I downloaded this morning. It was about 25% completed when I shut BOINC down temporarily. When I restarted BOINC the task started again, but from 0 percent complete. From my perspective this is a major flaw with the new work units. I stop and restart BOINC on all of my computers from time to time, not to mention power interruptions and restarting the computer itself. If I'm going to lose work in progress each time, it becomes counter productive. Last night during a storm I lost power to my computers five different times. If I had been running the longer work units and they zeroed out when stopped, I would have lost 20 or more hours of actual computing time. In the past when stopping BOINC or my computer I have lost a few seconds of computing time on work units in progress. I don't mind running longer work units, I do mind losing work in progress. |
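What Zen describes sounds like a checkpoint not being written or read back, since in BOINC checkpointing is the science app's job, not the client's. A minimal sketch of the pattern, with plain Python standing in for the real C API calls (boinc_time_to_checkpoint() / boinc_checkpoint_completed()):

```python
import json
import os
import tempfile

# Minimal checkpoint/restart pattern, mimicking what a BOINC science app
# must do so progress survives a client restart or power cut. Plain Python
# stand-in; real apps use boinc_time_to_checkpoint()/boinc_checkpoint_completed().

CKPT = "checkpoint.json"

def save_checkpoint(state):
    # Write to a temp file and rename, so a power cut cannot leave a torn file.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"next_chunk": 0}  # start from zero only when no checkpoint exists

state = load_checkpoint()
for chunk in range(state["next_chunk"], 100):
    pass  # ... crunch this chunk of the workunit ...
    if chunk % 10 == 9:  # in a real app: if boinc_time_to_checkpoint():
        save_checkpoint({"next_chunk": chunk + 1})
```

If a task restarts from 0% instead of the last checkpoint, either the app never checkpointed or the saved state wasn't read back on startup, and that is an app bug rather than a property of the longer workunits themselves.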
Pappa Send message Joined: 9 Jan 00 Posts: 2562 Credit: 12,301,681 RAC: 0 |
I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so). Two things: first, DCF now has to adjust slightly to the new, longer workunits. That will take at least 20 completed workunits; then reporting of time estimates will be "better." Currently, if you have a mix, it will be confused. Second, for Optimized Apps: Lunatics has released a Unified Installer with the latest Optimized Apps, which was identified here: Lunatics Unified Installer for Windows v0.2 Please consider a Donation to the Seti Project. |
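The DCF drift Pappa mentions can be sketched numerically. The "instant jump up on underestimates, gradual steps down on overestimates" behaviour is from memory of the BOINC client and the 10% step size is an assumption; the target ratio of 0.975 matches Joe's ~1.95x-actual vs 2x-estimate figures.

```python
# Sketch of the client's Duration Correction Factor (DCF) drifting toward a
# new actual/estimated runtime ratio. The "instant up, 10% steps down" rule
# is an assumption from memory of the BOINC client, not its exact code.

def update_dcf(dcf, ratio):
    if ratio > dcf:
        return ratio                       # underestimates corrected at once
    return dcf + 0.1 * (ratio - dcf)       # overestimates decay gradually

dcf = 1.0
for _ in range(20):                        # twenty completed "new" workunits
    dcf = update_dcf(dcf, 0.975)           # actual/estimate per Joe's figures
print(f"DCF after 20 results: {dcf:.4f}")
```

The slow downward path is why a mixed cache of old and new tasks shows confused "To Completion" numbers for a while before settling.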
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.