Longer MB tasks are here



Message boards : Number crunching : Longer MB tasks are here

Author Message
samuel7
Volunteer tester
Send message
Joined: 2 Jan 00
Posts: 47
Credit: 2,194,240
RAC: 0
Finland
Message 920442 - Posted: 22 Jul 2009, 21:15:49 UTC

The change is in. Downloaded this evening (UTC+3):

<chirp_resolution>0.1665

This should mean double the run time. Deadlines for anything other than shorties are in September.

For those who want to know, a VLAR has
<rsc_fpops_est>160720000000000.000000

and a VHAR
<rsc_fpops_est>47560000000000.000000
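For a rough sense of how these header numbers turn into the "to completion" estimates people are seeing, here is a minimal sketch, assuming BOINC's usual approach of dividing `<rsc_fpops_est>` by the host's benchmarked floating-point speed. The 10 GFLOPS host speed is an illustrative assumption, not a figure from this thread.

```python
# Sketch: initial duration estimate ~= total estimated FLOPs / host FLOPS.
VLAR_FPOPS = 160_720_000_000_000.0   # <rsc_fpops_est> for a VLAR, from above
VHAR_FPOPS = 47_560_000_000_000.0    # <rsc_fpops_est> for a VHAR, from above

def estimated_runtime_hours(rsc_fpops_est, host_flops):
    """Naive duration estimate in hours: total FLOPs / FLOPs per second."""
    return rsc_fpops_est / host_flops / 3600.0

# Illustrative host benchmarked at ~10 GFLOPS (assumed value):
print(round(estimated_runtime_hours(VLAR_FPOPS, 10e9), 2))  # about 4.46 h
print(round(estimated_runtime_hours(VHAR_FPOPS, 10e9), 2))  # about 1.32 h
```

With the old, halved `<rsc_fpops_est>` values the same arithmetic gives roughly half these figures, which matches the "double run time" observation.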

____________

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2206
Credit: 8,042,605
RAC: 4,330
United States
Message 920447 - Posted: 22 Jul 2009, 21:27:52 UTC

I also just noticed some fresh downloads in my list that are expected to take about twice as long as the typical 0.44 AR tasks that estimate ~2 hours. The new tasks are estimating a little over 4 hours.

Not complaining in the least bit. I believe this is a good way to "slow down" the fast hosts without stepping on any toes. Longer crunch time = less server pounding, and to make it better, longer crunch time comes in the same size package as before (~367 KiB).
____________

Linux laptop uptime: 1484d 22h 42m
Ended due to UPS failure, found 14 hours after the fact

Profile [seti.international] Dirk Sadowski
Volunteer tester
Avatar
Send message
Joined: 6 Apr 07
Posts: 6974
Credit: 57,226,802
RAC: 22,385
Germany
Message 920455 - Posted: 22 Jul 2009, 21:43:35 UTC


BTW.

Will the credits be doubled as well? ;-)


And what about the ARs?
Because of the identification by the CUDA_VLARkill_app: are the VLARs still the same, or are there changed/other/new ARs?

____________
BR



>Das Deutsche Cafe. The German Cafe.<

clive G1FYE
Volunteer moderator
Send message
Joined: 4 Nov 04
Posts: 1300
Credit: 23,054,144
RAC: 2
United Kingdom
Message 920480 - Posted: 22 Jul 2009, 22:53:36 UTC

Looks like I have just got some of them;
completion times jump from 1:08 to 2:17.
Well, I cannot get long AP for this Linux Q6600, so these will do.

JohnDK
Volunteer tester
Avatar
Send message
Joined: 28 May 00
Posts: 823
Credit: 34,021,949
RAC: 73,218
Denmark
Message 920491 - Posted: 22 Jul 2009, 23:45:55 UTC - in response to Message 920455.
Last modified: 22 Jul 2009, 23:46:23 UTC


Will the credits be doubled as well? ;-)

Very good question: double work = double credit, or half credit?

John McLeod VII
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 15 Jul 99
Posts: 23702
Credit: 499,579
RAC: 570
United States
Message 920493 - Posted: 22 Jul 2009, 23:49:42 UTC - in response to Message 920491.


Will the credits be doubled as well? ;-)

Very good question: double work = double credit, or half credit?

If there are extra optimizations in the code, it may be less than double credit as it is the FLOP count that is counted for credit.
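As a hedged illustration of the FLOP-count-to-credit relationship mentioned above, here is the textbook BOINC "cobblestone" definition (one credit represents 1/200 of a day's work on a 1 GFLOPS machine). SETI@home applied further server-side normalization on top of this, so actual grants differed; treat this as a sketch only.

```python
# FLOPs per credit under the cobblestone definition:
# a 1 GFLOPS machine earns 200 credits per 86400-second day.
COBBLESTONE_FLOPS = 1e9 * 86400 / 200  # = 4.32e11 FLOPs per credit

def flops_to_credit(flops):
    """Raw credit claim implied by a FLOP count (before normalization)."""
    return flops / COBBLESTONE_FLOPS

# Doubling the counted FLOPs doubles the raw claim (illustrative numbers):
old_claim = flops_to_credit(8.0e13)
new_claim = flops_to_credit(1.6e14)
print(new_claim / old_claim)  # 2.0
```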
____________


BOINC WIKI

Profile Vistro
Avatar
Send message
Joined: 6 Aug 08
Posts: 233
Credit: 316,549
RAC: 0
United States
Message 920495 - Posted: 22 Jul 2009, 23:50:35 UTC

I always thought credits were directly tied to how many calculations your CPU did. So a longer work unit requires more calculations and gives you more credit.

Profile Pappa
Volunteer tester
Avatar
Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 920501 - Posted: 23 Jul 2009, 0:19:05 UTC
Last modified: 23 Jul 2009, 0:19:31 UTC

Just so everyone knows, this is true: the enhanced workunits have arrived.

The enhanced workunits were tested in SETI Beta, and no outward ill effects were identified.

In a conversation with Eric, he confirmed it. He also confirmed that, since the new workunits require twice the FLOPs, more credit will be granted. That is still subject to the normalization script running; for that we wait.

Currently, anyone running an Optimized Application will continue to work and "should" cause no ill effects (errors). If you see a larger number of workunit errors, report them here in Number Crunching.

Regards
____________
Please consider a Donation to the Seti Project.

Profile [seti.international] Dirk Sadowski
Volunteer tester
Avatar
Send message
Joined: 6 Apr 07
Posts: 6974
Credit: 57,226,802
RAC: 22,385
Germany
Message 920511 - Posted: 23 Jul 2009, 0:48:24 UTC - in response to Message 920501.
Last modified: 23 Jul 2009, 0:52:16 UTC

...
Currently, anyone running an Optimized Application will continue to work and "should" cause no ill effects (errors). If you see a larger number of workunit errors, report them here in Number Crunching.
...


It would be good if the 'PC task list overview' were available again for this... ;-)

'Error' and 'result validation' overview.

____________
BR



>Das Deutsche Cafe. The German Cafe.<

Profile Pappa
Volunteer tester
Avatar
Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 920517 - Posted: 23 Jul 2009, 1:00:11 UTC - in response to Message 920511.

Patience...

So during this recovery, one would hope that no one would do anything prematurely.
Last I looked, the replica was still a bit behind the master, as things are actually uploading and downloading.

I also cannot see my tasks. When it is deemed appropriate to turn them back on, they will appear!

Regards

...
Currently, anyone running an Optimized Application will continue to work and "should" cause no ill effects (errors). If you see a larger number of workunit errors, report them here in Number Crunching.
...


It would be good if the 'PC task list overview' were available again for this... ;-)

'Error' and 'result validation' overview.


____________
Please consider a Donation to the Seti Project.

John McLeod VII
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 15 Jul 99
Posts: 23702
Credit: 499,579
RAC: 570
United States
Message 920519 - Posted: 23 Jul 2009, 1:05:56 UTC - in response to Message 920495.

I always thought credits were directly tied to how many calculations your CPU did. So a longer work unit requires more calculations and gives you more credit.

Yes. However, there can be two things happening that somewhat pull against each other. Extra depth causes more calculations, and better enhancements cause fewer calculations.
____________


BOINC WIKI

zpm
Volunteer tester
Avatar
Send message
Joined: 25 Apr 08
Posts: 284
Credit: 1,187,904
RAC: 3,714
United States
Message 920531 - Posted: 23 Jul 2009, 2:18:19 UTC - in response to Message 920519.

it's one of those, 2 steps forward; 1 step back...

Profile Pappa
Volunteer tester
Avatar
Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 920536 - Posted: 23 Jul 2009, 2:27:00 UTC - in response to Message 920531.
Last modified: 23 Jul 2009, 2:27:23 UTC

No, actually it is a step forward: it reduces the load without making everyone give up optimized apps, and it does more science while remaining backwards compatible.

It has just been tough getting there.


it's one of those, 2 steps forward; 1 step back...

____________
Please consider a Donation to the Seti Project.

Josef W. Segur
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4143
Credit: 1,005,918
RAC: 273
United States
Message 920548 - Posted: 23 Jul 2009, 3:14:04 UTC - in response to Message 920519.

I always thought credits were directly tied to how many calculations your CPU did. So a longer work unit requires more calculations and gives you more credit.

Yes. However, there can be two things happening that somewhat pull against each other. Extra depth causes more calculations, and better enhancements cause fewer calculations.

There is no code change, simply a header parameter adjustment.

The estimates and deadlines have doubled, and calculations very nearly doubled. Initialization code doesn't need to double, so elapsed times, particularly for CUDA, will not quite double. There are also operations done only once at zero chirp: for some angle ranges they will still be done only once, while for other angle ranges they will increase from one to three.

Overall, expect average run times about 1.95 times the old value for the same angle range. But since that's less than the doubling of the estimate the server-side credit adjustment will correct downwards slightly, maybe too little to really notice.
Joe
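Joe's numbers can be checked with a line of arithmetic: if the estimate (and hence the claim) doubles but the actual work rises only about 1.95x, the server-side correction is simply the ratio of the two.

```python
# Arithmetic sketch of the credit correction described above.
estimate_factor = 2.0    # <rsc_fpops_est> doubled
actual_factor = 1.95     # approximate observed run-time increase

correction = actual_factor / estimate_factor
print(correction)               # 0.975
print((1 - correction) * 100)   # about 2.5 (% downward adjustment)
```

A 2.5% shave on granted credit is indeed small enough that most crunchers would not notice it in day-to-day RAC.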

jravin
Send message
Joined: 25 Mar 02
Posts: 910
Credit: 86,667,396
RAC: 88,367
United States
Message 920603 - Posted: 23 Jul 2009, 7:52:39 UTC

I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so).
("To Completion" time is maybe 5-10 min. less over that time). They are < 10% complete, which would argue completion times around 8 hours (??).
Is this a bug? Feature? (I'm using the optimized apps).
____________

Profile Ageless
Avatar
Send message
Joined: 9 Jun 99
Posts: 12131
Credit: 2,525,065
RAC: 551
Netherlands
Message 920607 - Posted: 23 Jul 2009, 8:09:18 UTC - in response to Message 920536.

No, actually it is a step forward: it reduces the load without making everyone give up optimized apps, and it does more science while remaining backwards compatible.

I hope you're correct on that. How about those that use the VLAR killer for their GPUs? If all these are classed as VLAR, they'll be continuously killing them and downloading more work; no letup in the (down)load then.
____________
Jord

Loving awareness is free.

samuel7
Volunteer tester
Send message
Joined: 2 Jan 00
Posts: 47
Credit: 2,194,240
RAC: 0
Finland
Message 920616 - Posted: 23 Jul 2009, 9:39:40 UTC - in response to Message 920607.

No, actually it is a step forward: it reduces the load without making everyone give up optimized apps, and it does more science while remaining backwards compatible.

I hope you're correct on that. How about those that use the VLAR killer for their GPUs? If all these are classed as VLAR, they'll be continuously killing them and downloading more work; no letup in the (down)load then.


Raistmer can give a definitive answer, but I think his app looks at the angle range in the WU header and is "immune" to this change.

Marius' rescheduling tool does the same (it is based on Raistmer's perl script).

How the change affects the crunching of a VLAR on a GPU, I don't know.
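A sketch of the header check samuel7 describes, assuming the `true_angle_range` field found in MB workunit headers. Because such a tool classifies by angle range rather than by the runtime estimate, doubling `<rsc_fpops_est>` should not fool it. The 0.12 VLAR and 1.127 VHAR cutoffs used here are commonly cited community values, not taken from this thread.

```python
import re

# Assumed community cutoffs, not from this thread:
VLAR_THRESHOLD = 0.12    # below this: very low angle range
VHAR_THRESHOLD = 1.127   # above this: very high angle range ("shorties")

def classify_ar(header_text):
    """Classify a workunit by the angle range in its header text."""
    m = re.search(r"<true_angle_range>\s*([0-9.]+)", header_text)
    if not m:
        return "unknown"
    ar = float(m.group(1))
    if ar < VLAR_THRESHOLD:
        return "VLAR"
    if ar > VHAR_THRESHOLD:
        return "VHAR"
    return "midrange"

print(classify_ar("<true_angle_range>0.008955"))  # VLAR
print(classify_ar("<true_angle_range>0.44"))      # midrange
```

Since the header field is untouched by the estimate change, a VLAR-kill or rescheduling tool reading it would behave exactly as before.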
____________

jravin
Send message
Joined: 25 Mar 02
Posts: 910
Credit: 86,667,396
RAC: 88,367
United States
Message 920622 - Posted: 23 Jul 2009, 10:36:54 UTC - in response to Message 920603.

I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so).
("To Completion" time is maybe 5-10 min. less over that time). They are < 10% complete, which would argue completion times around 8 hours (??).
Is this a bug? Feature? (I'm using the optimized apps).


I take it back - I guess these just take a (long) while to start up - they seem to be settling down "normally" to a final CPU time in the area of the original 5 or so hours. It is now 3.5 hours or so into execution, and they all have 1-1.5 hours "To Completion".
My bad! And I'm glad.
____________

Zen
Send message
Joined: 25 May 99
Posts: 9
Credit: 3,363,732
RAC: 0
United States
Message 920660 - Posted: 23 Jul 2009, 13:33:03 UTC - in response to Message 920501.


Currently, anyone running an Optimized Application will continue to work and "should" cause no ill effects (errors). If you see a larger number of workunit errors, report them here in Number Crunching.

Regards


I don't know if anyone else has experienced a problem with the new work units or not, but I have. I got a short unit with the other longer MB files I downloaded this morning. It was about 25% completed when I shut BOINC down temporarily. When I restarted BOINC the task started again, but from 0 percent complete.

From my perspective this is a major flaw with the new work units. I stop and restart BOINC on all of my computers from time to time, not to mention power interruptions and restarting the computer itself. If I'm going to lose work in progress each time, it becomes counter productive. Last night during a storm I lost power to my computers five different times. If I had been running the longer work units and they zeroed out when stopped, I would have lost 20 or more hours of actual computing time.

In the past when stopping BOINC or my computer I have lost a few seconds of computing time on work units in progress. I don't mind running longer work units, I do mind losing work in progress.

____________

Profile Pappa
Volunteer tester
Avatar
Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 920674 - Posted: 23 Jul 2009, 14:34:43 UTC - in response to Message 920603.

I've got several of these 9/7 deadline "double size" WUs. On CUDA, they execute dropping 15-20sec of "To Completion" per second (sounds about right). But on the CPU app, I have 3 running on one of my machines that have run for about 40min. or so CPU time, and they are barely dropping at all (and erratically so).
("To Completion" time is maybe 5-10 min. less over that time). They are < 10% complete, which would argue completion times around 8 hours (??).
Is this a bug? Feature? (I'm using the optimized apps).


Two things. First, DCF (duration correction factor) now has to adjust to the new longer workunits; that will take at least 20 completed workunits. After that, the reported time estimates will be "better." Currently, if you have a mix of old and new tasks, the estimates will be confused.
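A simplified model of the DCF behaviour described above (the real client's update rule and rates differ; this is an assumption-laden sketch): DCF jumps up as soon as a task overruns its estimate, but only drifts down slowly, which is why it takes many completed workunits to settle after a change like this.

```python
# Toy DCF update: fast upward correction, slow downward drift.
# The 10% downward step is chosen for the demo, not the client's real rate.
def update_dcf(dcf, actual, estimate):
    ratio = actual / estimate
    if ratio > dcf:
        return ratio                    # jump up immediately on underestimate
    return 0.9 * dcf + 0.1 * ratio      # otherwise creep down slowly

dcf = 1.0
# A longer task against a stale (pre-doubling) estimate: DCF jumps at once.
dcf = update_dcf(dcf, actual=4.0, estimate=2.0)
print(dcf)  # 2.0
# Once estimates are corrected, DCF only drifts back down a little per result.
dcf = update_dcf(dcf, actual=2.0, estimate=2.0)
print(dcf)  # 1.9
```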

Second, for optimized apps: Lunatics has released a unified installer with the latest optimized apps, identified here: Lunatics Unified Installer for Windows v0.2



____________
Please consider a Donation to the Seti Project.



Copyright © 2014 University of California