Umm, how about LONGER TASKS????

Tex1954
Volunteer tester

Joined: 16 Mar 11
Posts: 12
Credit: 6,654,193
RAC: 17
United States
Message 1160350 - Posted: 8 Oct 2011, 21:23:43 UTC
Last modified: 8 Oct 2011, 21:49:56 UTC

It takes 6 minutes to crunch a SETI task on my beater box 9800 GT system. It only takes a couple of minutes to crunch on my main systems.

Since bandwidth seems to be a major problem, how about making the tasks 10 times longer? Einstein's downloads are over 4 MB with small uploads, and it almost never has problems.

So I think making the tasks 10 times longer/larger would help all around... It might snag some slower-speed internet users, so you could make it an option: short or long tasks.

Just brainstorming...

:)


PS: I'm thinking one of the major hang-ups with slow/intermittent downloading is simply the 2xPerHost small files and all the multiple file names and connections bogging things down. Tracking 1/10 as many file names would speed up indexing and disk I/O a lot, IMHO. I read about the 12-second scan thing and wonder why we can't do what Einstein does: send much larger and therefore fewer (per unit time) tasks. In fact, it seems to me Einstein sends multiple processing blocks per task, like eight 4 MB files or so for each single WU. I just can't help wondering whether the DL snags are more disk I/O seek-time related or not...
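
A quick back-of-envelope sketch of that idea in Python. All the numbers are illustrative assumptions (not actual project figures); it just shows that making tasks 10x bigger leaves total bandwidth alone while cutting the file and connection count to 1/10:

    # Illustrative only: assumed daily turnover and WU size, not measured figures.
    wu_size_kb = 367              # typical Arecibo multibeam WU download (~367 kB)
    results_per_day = 1_000_000   # assumed number of results the project turns over daily

    def server_load(scale):
        """What the servers would see if each WU held `scale` times more work."""
        wus_per_day = results_per_day / scale        # fewer WUs if each one is bigger
        files_tracked = wus_per_day * 2              # assuming 2 copies of each WU go out
        bandwidth_gb = wus_per_day * wu_size_kb * scale / 1e6  # total bytes stay the same
        return wus_per_day, files_tracked, bandwidth_gb

    for scale in (1, 10):
        wus, files, gb = server_load(scale)
        print(f"{scale:>2}x tasks: {wus:,.0f} WUs/day, {files:,.0f} files to track, {gb:,.0f} GB/day")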
ID: 1160350
janneseti
Joined: 14 Oct 09
Posts: 14106
Credit: 655,366
RAC: 0
Sweden
Message 1160397 - Posted: 8 Oct 2011, 22:45:28 UTC - in response to Message 1160350.  

> It takes 6 minutes to crunch a SETI task on my beater box 9800 GT system. It only takes a couple of minutes to crunch on my main systems.
>
> Since bandwidth seems to be a major problem, how about making the tasks 10 times longer? Einstein's downloads are over 4 MB with small uploads, and it almost never has problems.
>
> So I think making the tasks 10 times longer/larger would help all around... It might snag some slower-speed internet users, so you could make it an option: short or long tasks.
>
> Just brainstorming...
>
> :)

There is already a thread open on this matter:
http://setiathome.berkeley.edu/forum_thread.php?id=65700

To me there are two possible ways to decrease the network load at SETI:
1. Compress the WU files sent to crunchers (less data to send; rough sketch below).
2. Make WUs contain more data (less overhead when there are fewer WUs).
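
As a rough sanity check of option 1, something like this Python snippet could show how much a WU file shrinks. Here 'workunit.xml' is just a hypothetical local copy of a downloaded WU file, and the real savings depend on how the signal data inside is encoded:

    # Hypothetical check of how much gzip would shave off one WU file.
    import gzip

    path = "workunit.xml"            # assumed: a locally saved copy of a WU download
    with open(path, "rb") as f:
        raw = f.read()
    packed = gzip.compress(raw, compresslevel=9)

    print(f"original: {len(raw):,} bytes")
    print(f"gzipped:  {len(packed):,} bytes")
    print(f"saved:    {100 * (1 - len(packed) / len(raw)):.1f}%")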
ID: 1160397
Tex1954
Volunteer tester

Joined: 16 Mar 11
Posts: 12
Credit: 6,654,193
RAC: 17
United States
Message 1160405 - Posted: 8 Oct 2011, 23:10:15 UTC - in response to Message 1160397.  

I agree 100% and posted the other thread as well.

I'm all for making the tasks 10 times longer/larger.

:)
ID: 1160405
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1160422 - Posted: 8 Oct 2011, 23:38:26 UTC

Well we already doubled the length of MB WUs. I think that was in '09. It was shortly after GPUs started being used. It was an effort to basically make half as many WUs in progress at any given time and hopefully have clients asking for work less often.

More data was not put into the WUs, but the "resolution" at which the data is analyzed got doubled, which made twice the amount of work out of the same ~367kb WU.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1160422
Mike
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1160423 - Posted: 8 Oct 2011, 23:40:26 UTC - in response to Message 1160422.  
Last modified: 8 Oct 2011, 23:43:04 UTC

> Well we already doubled the length of MB WUs. I think that was in '09. It was shortly after GPUs started being used. It was an effort to basically make half as many WUs in progress at any given time and hopefully have clients asking for work less often.
>
> More data was not put into the WUs, but the "resolution" at which the data is analyzed got doubled, which made twice the amount of work out of the same ~367kb WU.


Correct.

And with V7 coming, run times will increase again.


With each crime and every kindness we birth our future.
ID: 1160423
skildude
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1160445 - Posted: 9 Oct 2011, 1:35:27 UTC - in response to Message 1160423.  

You'd want VHAR WUs made longer, not all WUs. VLAR WUs already take up to a few hours to run, even on a GPU. VHAR WUs could be easily identified and extended, I would think.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 1160445
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1160484 - Posted: 9 Oct 2011, 5:05:21 UTC

Longer tasks are already here, known as Astropulse. ;^)

When applications to handle the GBT data can be built and tested, there's a good chance the S@h-style processing will be tasks of ~450-second duration rather than 107.37 seconds, simply because that was the planned duration of the targeted observations. Those will also be VLAR; that's what a targeted observation produces. So think in terms of WUs which are ~4.2x the size of an Arecibo multibeam WU and take at least proportionally longer to crunch (or maybe 10x or more). The later scanning across the whole Kepler field would IMO probably be done with the same larger WUs, for consistency.

Doing Astropulse-style processing on the GBT data might take months and involve huge WUs. Dedispersion is most meaningful when done over the largest available bandwidth, and the GBT recordings have at least 232 times more bandwidth than what the multibeam recorder at Arecibo is capturing. The much higher sample rate also allows extremely fine increments of dedispersion; conceivably the processing could take 232*232 times as long.
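
Just putting rough numbers on those ratios (nothing here is measured, it's only the figures quoted above):

    # Ratios implied by the numbers above; purely back-of-envelope.
    arecibo_wu_seconds = 107.37   # data duration in an Arecibo multibeam WU
    gbt_targeted_seconds = 450.0  # planned duration of a targeted GBT observation

    size_ratio = gbt_targeted_seconds / arecibo_wu_seconds
    print(f"GBT multibeam-style WU vs Arecibo WU: ~{size_ratio:.1f}x the data")   # ~4.2x

    bandwidth_ratio = 232         # GBT bandwidth vs the Arecibo multibeam recorder
    # Wider bandwidth and finer dedispersion steps could multiply together:
    print(f"Astropulse-style work on GBT data: up to ~{bandwidth_ratio**2:,}x as long")  # ~53,824x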

For the Arecibo multibeam work, I doubt the project will dump much larger WUs on those who have been with the project from the beginning and don't have the latest technology. The comments in the other thread about projects which have arranged to pack multiple WUs in one transfer perhaps point the way to adapt to the ~10000:1 ratio of crunching capability here. If the top hosts could get such packs rather than single WUs it could improve various aspects of the situation, though details could make it impractical here.
                                                                  Joe
ID: 1160484
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1160504 - Posted: 9 Oct 2011, 7:35:39 UTC
Last modified: 9 Oct 2011, 7:36:11 UTC

Thanks for the insights, Joe.
The prospect of the project processing GBT data with so much science packed into it is rather exciting. And the project has the processing power out here to handle it. And the Lunatics folks to optimize it. And the hard working, though few, Seti staff to find a way to make it happen.

Just imagine....millions of credits per WU!! LOL, just kidding.

All BS aside, I think great things are in the offing for the Seti search at Berkeley. And if I have anything to say about it, the kitty crunching crew will be here participating in whatever capacity we can.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1160504
garfield
Volunteer tester

Joined: 4 Jan 02
Posts: 45
Credit: 7,409,265
RAC: 65
Austria
Message 1160511 - Posted: 9 Oct 2011, 8:12:02 UTC - in response to Message 1160484.  

> Longer tasks are already here, known as Astropulse. ;^)
>
> For the Arecibo multibeam work, I doubt the project will dump much larger WUs on those who have been with the project from the beginning and don't have the latest technology. The comments in the other thread about projects which have arranged to pack multiple WUs in one transfer perhaps point the way to adapt to the ~10000:1 ratio of crunching capability here. If the top hosts could get such packs rather than single WUs it could improve various aspects of the situation, though details could make it impractical here.
>
> Joe


The ~10000:1 spread in crunching capability is a problem for all projects.
Collatz has implemented a choice between 'Collatz' and 'Mini Collatz' in the personal settings.
Maybe it's helpful to keep that in mind when making a decision on the new concept.
ID: 1160511
dskagcommunity
Volunteer tester
Joined: 24 Feb 11
Posts: 43
Credit: 2,901,049
RAC: 0
Austria
Message 1160535 - Posted: 9 Oct 2011, 10:10:06 UTC - in response to Message 1160350.  

> It takes 6 minutes to crunch a SETI task on my beater box 9800 GT system. It only takes a couple of minutes to crunch on my main systems.

Do you only get shorties? I get SETI WUs of 7-45 minutes duration on my 9800GTX machines, so there are often enough bigger ones ^^
ID: 1160535
Vipin Palazhi
Joined: 29 Feb 08
Posts: 286
Credit: 167,386,578
RAC: 0
India
Message 1160708 - Posted: 9 Oct 2011, 20:06:38 UTC

I understand that the work units have been made bigger, but I would like them to be even bigger. I am not talking about AP units, just the regular MBs. When I first started crunching SETI, it was fun to watch the % done on the work units, and I used to feel pleased that my system had finally finished the big task. Now the GPUs tear through them so fast that they come and go in a blink. Wouldn't it be feasible to make the GPU units 10 or even 50 times larger?

And of late, there seems to be a dearth of GPU tasks as my card has been mostly idling for the past couple of days.
______________

ID: 1160708
Tom95134

Joined: 27 Nov 01
Posts: 216
Credit: 3,790,200
RAC: 0
United States
Message 1161341 - Posted: 12 Oct 2011, 3:15:33 UTC

Frankly, I think the WU size is just fine. I am running a Core2 Quad and the SETI GPU tasks take anything from about 12 minutes to just under 1 hour. I have never seen any that run more than 1 hour. I kind of like to see some progress through the listing of tasks instead of just sitting there crunching on a single task.

I have just started running Einstein@Home again and their GPU tasks run about 1.5 hours now. They used to run long GPU tasks which, at that time, would result in SETI tasks expiring.

SETI CPU tasks run a lot longer.
ID: 1161341
