Message boards :
Number crunching :
WASTING MY TIME?
Author | Message |
---|---|
Blurf Send message Joined: 2 Sep 06 Posts: 8962 Credit: 12,678,685 RAC: 0 |
I take your point, Gary; we long-term hard-liners are OK, but there are those that want the "feel-good factor" which Matt managed to give them. Chris, there's the problem: you won't get enough input. The lab staff simply won't have the time to give you what you want, and people like Richard can only give so much info. We really need to find a way to bring in more staff to help with the workload. Our GPU fundraiser group needs to start finding fresh sources of new funds. We have a local billionaire named Thomas Golisano; I'm thinking of trying to invoke some connections I have to him. |
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
$25 million spread out over the next 5 years would probably be adequate for enough salaried staff. |
Tron Send message Joined: 16 Aug 09 Posts: 180 Credit: 2,250,468 RAC: 0 |
The general idea would be to send 107.374 seconds of raw data from a channel to the host (a 64 MiB + overhead "super WU") and have the app split it into 256 frequency subbands (MB WUs), plus split it into 8 sequential time sections at full bandwidth (AP WUs). The advantage is that a single ~64 MiB download replaces ~162 MiB of downloads for work split server-side. Both the raw data and the split data use the same 2 bits to represent a complex data point; the savings come from sending the data only once, plus far less overhead in the WU header information. Take that a step further and have the host distribute some of the "locally split" WUs. |
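A quick back-of-the-envelope check of the saving Tron describes. The per-WU sizes below are assumptions inferred from the figures quoted in this thread, not official project numbers; the MB WU size is taken as ~0.38 MiB so that the server-side total matches the ~162 MiB mentioned above.

```python
MIB = 1024 * 1024

# Assumed sizes, inferred from the post above (not official figures).
raw_super_wu = 64 * MIB      # one 107.374 s raw channel dump ("super WU")
mb_wu = 0.38 * MIB           # one multibeam WU, incl. header/overlap (assumed)
ap_wu = 8 * MIB              # one Astropulse WU (8 MiB data section)

# Downloads needed if the same data is instead split server side:
server_split = 256 * mb_wu + 8 * ap_wu

print(round(server_split / MIB))              # ~161 MiB, near the ~162 MiB quoted
print(round(server_split / raw_super_wu, 1))  # ~2.5x the single raw download
```

So on these assumed sizes, shipping the raw section once cuts the download volume per chunk of sky data by roughly a factor of 2.5, before counting the saved per-WU request overhead.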
Slavac Send message Joined: 27 Apr 11 Posts: 1932 Credit: 17,952,639 RAC: 0 |
Mike, if I had volunteers contributing to my efforts, I would treat them as a valuable resource, not ignore them and take them for granted. It is simply the courteous thing to do. While there are a few things the lab could do better (namely keeping the volunteers informed), here's something to keep in mind: through our volunteers who donate their time and money to SETI through the GPUUG, we've accomplished to date http://www.gpuug.org/content/what-we-do. Before we started, the infrastructure of the lab was in fairly poor condition. For evidence of this, take a look at the first server donated, Synergy, and now look at how many tasks that one machine is running. Add in Paddy and George, and together these 3 servers have taken over the duties of, I believe, 8 now-retired servers. Since then our donors have upgraded everything from the server closet (Synergy, GeorgeM, PaddyM, a new switch, more RAM, a filled JBOD) to the lab itself (workstations, desktop setups, UPSes) to the basics (120 and counting transport drives, plus protective cases). We're even upgrading how SETI collects the data you process, with our compute nodes, Brocade switches, docks and so on. Heck, our donors have even contributed a large fistful of cash. ______________________________ While I like to point to the above and rant and rave about how awesome our donors are and what they've done, the issue remains: if it doesn't show up on Jim Donor's computer in some visible way, it doesn't seem to matter. This issue is frustrating to me; however, it's completely understandable, given that the community is largely focused on tangible results. ______________________________ The issue we're facing is a bit understandable. Consider that we're the largest BOINC project currently running. We chew through immense amounts of data thanks to ever-increasing technology. Compare a 560 Ti to a 690, for example, and realize that that advancement represents about a year's time of development. 
As a result of the above, coupled with our need to upgrade infrastructure, we run into problems like the ones we've been having. ______________________________ For my end of the chain, we're going to continue to work through our donors to upgrade the project's infrastructure, in hopes that we can avoid these issues in the future. One of my primary goals is smoothing out the system while at the same time increasing the amount of data we're processing (yay, more science!). In short, try to be understanding. We have X resources, while Y (the amount of data users can process per unit time) is ever increasing. There are several logjams in our way, namely a lack of staff, a lack of proper bandwidth, and necessary infrastructure upgrades. We (GPUUG) are working on fixing all of the above, but we need time, money, and volunteers willing to lend us a hand. ------------------------------ Sorry for the very long-winded response, but I hope it gives a few folks something to think about in light of the issues we've been having. Executive Director GPU Users Group Inc. - brad@gpuug.org |
Slavac Send message Joined: 27 Apr 11 Posts: 1932 Credit: 17,952,639 RAC: 0 |
I take your point Gary, us long term hard liners are Ok, but there are those that want the "feelgood factor" which Matt managed to give them. To be fair, we're essentially a two-man operation. I've begged for volunteers to write grants or letters, make calls, and generally help us however they can, but so far I've gotten next to nothing in the way of volunteers. If I had two people who could spend maybe 4 hours a week contacting potential donors, I could do some good. One day I'll have them, I hope. Executive Director GPU Users Group Inc. - brad@gpuug.org |
alan Send message Joined: 18 Feb 00 Posts: 131 Credit: 401,606 RAC: 0 |
Josef W. Segur wrote
Exactly. By this method you would be distributing more of the processing out to the end-user machines, which is what this project is all about. It should attract Dr. Anderson's attention. Nobody has addressed validation yet, though. Tron wrote
Why on Earth would you want to redistribute these WUs? If the end-user machine is big and fast enough to take on a whole raw data unit, it's also big and fast enough to process the WUs so generated. Indeed, the main saving is in reducing the bandwidth required at the central servers, by making only one download of the data and one upload of the combined set of results. The combined results would need to be validated as a unit against the same unit processed by a different end-user. Any disagreement would result in the whole unit being sent out again for processing by a third user, exactly as is done at present. So you would reduce the load on the splitter processes and reduce the bandwidth requirement. |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30651 Credit: 53,134,872 RAC: 32 |
All this splitting talk is forgetting one thing: it still would need to be split. A "tape" isn't just 107+ seconds of data. The "tape" may be several hours of data, and that data includes parameters that presently aren't being sent to us but that the splitters use in splitting the data. Aren't the "tapes" presently 2TB hard drives? Do you want to wait for that to download before you can do the next work unit? Are you willing to have work units that make CPDN work units look small? |
Slavac Send message Joined: 27 Apr 11 Posts: 1932 Credit: 17,952,639 RAC: 0 |
All this splitting talk is forgetting one thing. It still would need to be split. A "tape" isn't just 107+ seconds of data. The "tape" may be several hours of data, and that data includes parameters that presently aren't being sent to us but that the splitters use in splitting the data. Aren't the "tapes" presently 2Tb hard drives? Do you want to wait for that to D/L before you can do the next work unit? Are you willing to have work units that make CPDN work units look small? 2TB for most everything (GBT/Arecibo), 3TB for AP reob and a few 1TB's tossed into the mix. Executive Director GPU Users Group Inc. - brad@gpuug.org |
Horacio Send message Joined: 14 Jan 00 Posts: 536 Credit: 75,967,266 RAC: 0 |
All this splitting talk is forgetting one thing: it still would need to be split. A "tape" isn't just 107+ seconds of data. The "tape" may be several hours of data, and that data includes parameters that presently aren't being sent to us but that the splitters use in splitting the data. Aren't the "tapes" presently 2TB hard drives? Do you want to wait for that to download before you can do the next work unit? Are you willing to have work units that make CPDN work units look small? The idea, which might work or not, is not to avoid the splitting, but to avoid sending the "same" data twice, once for AP and once for MB... Of course this is not a trivial change in the way things work, but the general idea is to modify the AP WU so that we, on the client side, will be able to extract the MB data and crunch all that data with both approaches... If this were possible it would lessen the bandwidth usage. I know that not every host/user has a broadband connection, and that means the project would need to send standard MBs and APs along with these new WUs, but I think those hosts are not the ones clogging the pipes, and all the "heavy" users would opt for this new kind of work, which would keep them busy for more time with less bandwidth needed and also with less interaction with the scheduler... In theory, it sounds good... In practice I don't see them being able to implement this at the current level of resources and "busy-ness"... But as this is not a short-term project, and if there are no technical reasons against the idea, they could plan it for the long term as the next step after MBv7 or any other already-planned step... And if there is something that makes this definitely not possible, it would be good to know, so we can forget about it and think up another "wonderful" idea to scorch with here! ;D |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
I'm tired of trying to make SETI work; now, even with or without Proxy+TCP+Hosts, I can't even report the results... I will sail in other seas for a while, at least until the people from the lab do something that really, finally fixes the problem. And all that on the day I celebrate the 100MM mark; that's not fair... |
Andre Howard Send message Joined: 16 May 99 Posts: 124 Credit: 217,463,217 RAC: 0 |
I tired to try to make SETI works, now even with or without Proxy+TCP+Hosts can´t even report the results... Same problem here with the proxies in the last hour or so. Congratulations on the 100mm, hope things will get straightened out soon so I can get there too. |
MusicGod Send message Joined: 7 Dec 02 Posts: 97 Credit: 24,782,870 RAC: 0 |
I'm with you, Clyde. I started in 2002 and the project just seems to be plagued. I've been thinking for some time about shutting down permanently on my 10th anniversary. I can't babysit my machines, and I'm donating my time and energy (in the form of electricity) to the project while everything seems to be getting worse. I'm not crying, I'm not complaining... I am just saying! With so many bad hosts out there sending bad results, it has caused a lot of problems, in my belief, and then we get stuck with leftovers. I know these guys put a lot of their own time into the project, and I thank them for that, but it is getting to be too much on this end. |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
All this splitting talk is forgetting one thing: it still would need to be split. A "tape" isn't just 107+ seconds of data. The "tape" may be several hours of data, and that data includes parameters that presently aren't being sent to us but that the splitters use in splitting the data. Aren't the "tapes" presently 2TB hard drives? Do you want to wait for that to download before you can do the next work unit? Are you willing to have work units that make CPDN work units look small? The "tape" files delivered to the splitters are generally 50.2 GB and contain raw data for all channels over a period of about 1.5 hours. Yes, if this pie-in-the-sky idea were ever implemented, a splitter process would have to extract 64 MiB sections of raw data for one channel, do any radar blanking needed, and package the data with suitable header information. In effect that would be a minor change from what the ap_splitter processes now do for 8 MiB data sections, so I don't see it as a significant hindrance to the idea. Joe |
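Joe's figures can be cross-checked against Tron's. A 50.2 GB "tape" covering ~1.5 hours, carved into 64 MiB sections of 107.374 s each, implies a particular number of recorded channels. This is only a consistency sketch; treating 50.2 GB as decimal gigabytes is an assumption.

```python
chunk_seconds = 107.374        # one raw data section (from Tron's post)
chunk_bytes = 64 * 1024**2     # 64 MiB per channel per section
tape_bytes = 50.2e9            # one "tape" file (assumed decimal GB)
tape_seconds = 1.5 * 3600      # ~1.5 hours of recording per tape

sections_per_channel = tape_seconds / chunk_seconds
channels = tape_bytes / (sections_per_channel * chunk_bytes)

print(round(sections_per_channel))  # ~50 raw sections per channel per tape
print(round(channels))              # ~15 channels implied by these figures
```

The numbers hang together: ~15 channels times ~50 sections of 64 MiB is close to 50.2 GB, so the quoted tape size, section length, and section size are mutually consistent.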
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 30651 Credit: 53,134,872 RAC: 32 |
I am a little bewildered. For confirmation, you need the same spot in the sky to show a signal repeatedly. The next look at a given spot may be a couple of years after the last look. Others, principally those near objects interesting to radio astronomers, get looked at several times a year. |
Len Send message Joined: 15 Mar 10 Posts: 52 Credit: 11,725,173 RAC: 86 |
Sadly, these last two days, my PC has got no work at all from S@H. It has done a ton of work folding proteins and putting them away in the POEM drawer, but SETI seems to be under the impression I don't want work. I read my last few requests in the log and was surprised that I don't appear to be asking for work. So I tried suspending, then resuming, work, but still it seems I am not wanting work!
18/11/2012 13:01:06 | SETI@home | work fetch suspended by user
18/11/2012 13:01:09 | SETI@home | work fetch resumed by user
18/11/2012 13:05:40 | SETI@home | update requested by user
18/11/2012 13:05:45 | SETI@home | Sending scheduler request: Requested by user.
18/11/2012 13:05:45 | SETI@home | Not reporting or requesting tasks
18/11/2012 13:06:09 | SETI@home | Scheduler request completed
What has gone wrong? Anyone know? I don't think I'm wasting my time; the proteins needed folding anyway, but I have it set so it should only get a quarter of the work SETI does. Len. I think I am. Therefore I am. I think. |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
The only "band-aid" solution these days is to try a Proxy+TCP optimizer; if it works you will get your work (limited to 100 WUs per GPU/CPU, but that is better than nothing), and if not... you will stay with the proteins for a while. The only other thing we could do is pray for a quick return of Matt to the lab... |
Manuel Palacios Send message Joined: 2 Nov 99 Posts: 74 Credit: 30,209,980 RAC: 56 |
I think we all have to realize here that the collective performance capability of the computers attached to this project has increased exponentially, while manpower and resources at SETI have stayed about the same. We went through a long outage, with difficulty getting tasks, not too long ago historically speaking, and at this point it is just another hiccup. I have been with this project over 13 years now, and have no plans to abandon it because my machines are not getting enough WUs to crunch. Just leave your machines alone; it's not about the credits, it's about the science. Someone somewhere is crunching a WU that may or may not hold that elusive signal from ET, and I will continue to support the science and offer my computers in search of that elusive signal. For now, I just attach my GPU to Einstein and keep it busy until more SETI WUs take its place. Have a great day. |
BigWaveSurfer Send message Joined: 29 Nov 01 Posts: 186 Credit: 36,311,381 RAC: 141 |
Just a reminder. Mike, I have to disagree with you. This project would NOT be here without the volunteers; that is the whole premise of this project. We are the employees who do not get paid (actually, we pay to be employed: computer equipment, upgrades, electricity, etc.). This is not a town with a fire dept and a volunteer fire department that would still continue to function if the volunteers all quit. If all the volunteers of SETI quit, the project would fall flat on its face. When an entire project relies 100% on its volunteers, you would hope the project would bend over backwards to make sure they are happy. I am not saying SETI has not; I am not a very active member posting/reading. I just download the program, check for updates, and let it run its magic on my machines. I cannot imagine the amount of work a project like this would encompass, but I am glad to lend my computers' time. Just remember: even though we are 'volunteers', we are a LOT MORE than just 'volunteers', we are 'the project'! |
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
Do let me know when you have all the volunteers lined up and the date and time they're going to quit. I'd like to watch. I think your chances of that occurring are greater than zero, but not by much. Think lots of decimal places. |
alan Send message Joined: 18 Feb 00 Posts: 131 Credit: 401,606 RAC: 0 |
http://setiathome.berkeley.edu/forum_thread.php?id=70080&postid=1307567 "We've just crossed a threshold where each host computes fast enough that host queues and the result table have become large enough to cause this problem." "For a more permanent fix, we plan to do more work in each result by quadrupling the size of the workunits. But that fix will probably take months to implement and test." I said that the workunits were too small; all the recent shorties have pushed it over the edge. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.