Message boards :
Number crunching :
Umm, how about LONGER TASKS????
Tex1954 | Joined: 16 Mar 11 | Posts: 12 | Credit: 6,654,193 | RAC: 17
It takes 6 minutes to crunch a SETI task on my beater-box 9800 GT system, and only a couple of minutes on my main systems. Since bandwidth seems to be a major problem, how about making the tasks 10 times longer? Einstein tasks are over 4 MB to download, have a small upload size, and almost never have problems. So I think making the tasks 10 times longer/larger would help all around. It might snag some slower-speed internet users, so you could make it an option: short or long tasks. Just brainstorming... :)

PS: I'm thinking one of the major hangups with slow/intermittent downloading is simply the 2xPerHost small files and all the multiple file names and connections bogging things down. Tracking 1/10 as many file names would speed up indexing and disk I/O a lot, IMHO. I read about the 12-second scan thing and wonder why we can't do what Einstein does: send much larger and therefore fewer (per unit time) tasks. In fact, it seems to me Einstein sends multiple processing blocks per task, like eight 4 MB files or so for each single WU. I just can't help wondering whether the download snags are disk I/O seek-time related or not.
janneseti | Joined: 14 Oct 09 | Posts: 14106 | Credit: 655,366 | RAC: 0
> It takes 6 minutes to crunch a SETI task on my beater box 9800 GT system. It only takes a couple minutes to crunch on my main systems.

There is already a thread open on this matter: http://setiathome.berkeley.edu/forum_thread.php?id=65700

To me there are two possible ways to decrease the network load at SETI:

1. Compress WU files sent to crunchers (less data to send).
2. Make WUs contain more data (less overhead when there are fewer WUs).
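Whether option 1 would actually save bandwidth depends on how compressible the WU payload is. As a minimal sketch (not the actual SETI@home file format), here is how one might measure the gzip ratio on two stand-in payloads: noise-like sample data compresses poorly, while repetitive text such as XML headers compresses well.

```python
import gzip
import os

def compression_ratio(data: bytes) -> float:
    """Return compressed size / original size using gzip at the default level."""
    return len(gzip.compress(data)) / len(data)

# Hypothetical stand-ins: raw radio samples are nearly random bytes,
# while workunit XML headers are highly repetitive text.
noise = os.urandom(100_000)
text = b"<workunit_header/>" * 5_000

print(f"noise-like payload ratio: {compression_ratio(noise):.2f}")  # close to 1.0
print(f"repetitive text ratio:    {compression_ratio(text):.2f}")   # far below 1.0
```

If the bulk of a WU is noise-like sample data, compression buys little; if it is mostly structured text, the savings could be substantial.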
Tex1954 | Joined: 16 Mar 11 | Posts: 12 | Credit: 6,654,193 | RAC: 17
I agree 100% and posted in the other thread as well. I'm all for making the tasks 10 times longer/larger. :)
Cosmic_Ocean | Joined: 23 Dec 00 | Posts: 3027 | Credit: 13,516,867 | RAC: 13
Well, we already doubled the length of MB WUs; I think that was in '09, shortly after GPUs started being used. It was an effort to have roughly half as many WUs in progress at any given time, and hopefully to have clients asking for work less often. More data was not put into the WUs; instead, the "resolution" at which the data is analyzed was doubled, which made twice the amount of work out of the same ~367 KB WU.

Linux laptop record uptime: 1511d 20h 19m (ended due to the power brick giving up)
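The point above, that runtime doubled without the file getting bigger, follows from a simple cost model: compute scales with the number of search settings applied to the same recorded samples. This toy sketch (illustrative only, not the actual SETI@home analysis code) makes that explicit.

```python
# Toy cost model: one unit of work per (sample, search-setting) pair.
# Doubling the analysis "resolution" doubles the settings searched,
# so the work doubles while the ~367 KB input file stays the same size.

def relative_work(n_samples: int, n_search_settings: int) -> int:
    """Relative compute cost for analyzing n_samples at n_search_settings."""
    return n_samples * n_search_settings

SAMPLES = 1_048_576          # hypothetical sample count per workunit
before = relative_work(SAMPLES, n_search_settings=10)
after = relative_work(SAMPLES, n_search_settings=20)

assert after == 2 * before   # same data, twice the crunching
```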
Mike | Joined: 17 Feb 01 | Posts: 34258 | Credit: 79,922,639 | RAC: 80
> Well we already doubled the length of MB WUs. I think that was in '09. It was shortly after GPUs started being used. It was an effort to basically make half as many WUs in progress at any given time and hopefully have clients asking for work less often.

Correct. And with V7 coming, times will increase again.

With each crime and every kindness we birth our future.
skildude | Joined: 4 Oct 00 | Posts: 9541 | Credit: 50,759,529 | RAC: 60
You'd want VHAR WUs made longer, not all WUs. VLAR WUs already take up to a few hours to run, even on a GPU. VHAR WUs could be easily identified and extended, I would think.

In a rich man's house there is no place to spit but his face. - Diogenes of Sinope
Josef W. Segur | Joined: 30 Oct 99 | Posts: 4504 | Credit: 1,414,761 | RAC: 0
Longer tasks are already here, known as Astropulse. ;^)

When applications to process the GBT data can be built and tested, there's a good chance the S@h-style processing will use tasks of ~450 seconds' duration rather than 107.37 seconds, simply because that was the planned duration of the targeted observations. Those will also be VLAR; that's what a targeted observation produces. So think in terms of WUs which are ~4.2X the size of an Arecibo multibeam WU and take at least proportionally longer to crunch (or maybe 10X or more). The later scanning across the whole Kepler field would, IMO, probably be done with the same larger WUs for consistency.

Doing Astropulse-style processing on the GBT data might take months and involve huge WUs. Dedispersion is most meaningful when done over the largest available bandwidth, and the GBT recordings have at least 232 times more bandwidth than what the multibeam recorder at Arecibo is capturing. The much higher sample rate also allows extremely fine increments of dedispersion; conceivably the processing could take 232*232 times as long.

For the Arecibo multibeam work, I doubt the project will dump much larger WUs on those who have been with the project from the beginning and don't have the latest technology. The comments in the other thread about projects which have arranged to pack multiple WUs in one transfer perhaps point the way to adapt to the ~10000:1 ratio of crunching capability here. If the top hosts could get such packs rather than single WUs, it could improve various aspects of the situation, though details could make it impractical here.

Joe
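The figures quoted above check out arithmetically. A quick sanity check of the ~4.2X size estimate (from the observation durations) and the worst-case dedispersion scaling:

```python
# Sanity-check the estimates in the post above: a ~450 s GBT targeted
# observation versus the 107.37 s Arecibo multibeam workunit, and the
# worst-case dedispersion scaling quoted as 232 * 232.

duration_ratio = 450 / 107.37
print(f"{duration_ratio:.2f}x")   # prints 4.19x, matching the "~4.2X" estimate

dedispersion_factor = 232 * 232
print(dedispersion_factor)        # 232 * 232 = 53824, the worst-case multiplier
```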
kittyman | Joined: 9 Jul 00 | Posts: 51468 | Credit: 1,018,363,574 | RAC: 1,004
Thanks for the insights, Joe. The prospect of the project processing GBT data with so much science packed into it is rather exciting. And the project has the processing power out here to handle it, and the Lunatics folks to optimize it, and the hard-working, though few, SETI staff to find a way to make it happen. Just imagine... millions of credits per WU! LOL, just kidding.

All BS aside, I think great things are in the offing for the SETI search at Berkeley. And if I have anything to say about it, the kitty crunching crew will be here participating in whatever capacity we can.

"Freedom is just Chaos, with better lighting." - Alan Dean Foster
garfield | Joined: 4 Jan 02 | Posts: 45 | Credit: 7,409,265 | RAC: 65
> Longer tasks are already here, known as Astropulse. ;^)

The 10000:1 ratio in crunching capability is a problem for all projects. Collatz has implemented a selection between 'Collatz' and 'Mini Collatz' in the personal settings. Maybe it's helpful to keep that in mind when making a decision on the new concept.
dskagcommunity | Joined: 24 Feb 11 | Posts: 43 | Credit: 2,901,049 | RAC: 0
> It takes 6 minutes to crunch a SETI task on my beater box 9800 GT system. It only takes a couple minutes to crunch on my main systems.

Do you get only shorties? I get SETI WUs of 7 to 45 minutes' duration on my 9800 GTX machines, so there are often enough bigger ones. ^^
Vipin Palazhi | Joined: 29 Feb 08 | Posts: 286 | Credit: 167,386,578 | RAC: 0
I understand that the work units have been made bigger, but I would like them to be even bigger. I am not talking about AP units, just the regular MBs. When I first started crunching SETI, it was fun to watch the % done on the work units, and I used to feel pleased when my system had finally finished a big task. Now the GPUs tear through them so fast that they come and go in a blink. Wouldn't it be feasible to make the GPU units 10 or even 50 times larger? And of late, there seems to be a dearth of GPU tasks, as my card has been mostly idle for the past couple of days.
Tom95134 | Joined: 27 Nov 01 | Posts: 216 | Credit: 3,790,200 | RAC: 0
Frankly, I think the WU size is just fine. I am running a Core2 Quad, and the SETI GPU tasks take anything from about 12 minutes to just under 1 hour; I have never seen any that run more than 1 hour. I kind of like to see some progress through the listing of tasks instead of just sitting there crunching on a single task. I have just started running Einstein@Home again, and their GPU tasks run about 1.5 hours now. They used to run long GPU tasks which, at that time, would result in SETI tasks expiring. SETI CPU tasks run a lot longer.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.