Message boards : Number crunching : The End of Distributed Computing as we know it
Author | Message |
---|---|
Ace Casino Send message Joined: 5 Feb 03 Posts: 285 Credit: 29,750,804 RAC: 15 |
I believe this is the end to Distributed Computing, as we have known it for a decade or so. If you can complete work in seconds, rather than minutes or hours…why keep us around? The WU’s I’ve seen have completed in 4 - 7 seconds with CUDA. The wingman that have been paired with the CUDA have taken anywhere from 1,000+ seconds to 3,000+ seconds. SETI could take the $500,000 in donations per year, start buying NVIDIA cards, and do it there self a lot easier without the hassle from outside people. Other “potential†projects will surely opt for do-it-yourself now, rather than setup a Boinc Project. Even if they do decide to use distributed computing it may be limited to 1,000 or 20,000 users, or whatever that magic number might be. In the future Projects with only the most “Massive†amounts of data would use distributed computing. Even these Projects may limit the number of people due to the data transfer limitations, data creation, or other limitations they may have. The end of Distributive Computing as we know it may not happen overnight……but will likely happen in the next few years if NVIDIA or other types of drivers prove as, or more successful in crunching data. Computers from: home to Super, are getting faster and less expensive and are becoming easily affordable for almost anyone, especially a University or research group. There may always be a small niche for Distributed Computing in the future, but not on the scale it is now. Think of yourself as a Columbus or Magellan, you helped pave the way….but even their trips came to an end. |
Sirius B Send message Joined: 26 Dec 00 Posts: 24879 Credit: 3,081,182 RAC: 7 |
I believe this is the end to Distributed Computing, as we have known it for a decade or so. Yep, that old quote comes to mind - "All good things come to an end". However, I don't think that many universities, companies, or researchers will waste their budget on machines to crunch their data when they can have a massive volunteer network doing the crunching free of charge. If they did it in-house, think about their bills; with DC, they won't have that unnecessary outlay. |
tullio Send message Joined: 9 Apr 04 Posts: 8797 Credit: 2,930,782 RAC: 1 |
Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA. Tullio |
Byron S Goodgame Send message Joined: 16 Jan 06 Posts: 1145 Credit: 3,936,993 RAC: 0 |
If you can complete work in seconds, rather than minutes or hours…why keep us around? The WU’s I’ve seen have completed in 4 - 7 seconds with CUDA. The wingman that have been paired with the CUDA have taken anywhere from 1,000+ seconds to 3,000+ seconds. The tasks aren't being done in seconds by people using CUDA (edit: well, maybe some of the -9 tasks). That just shows the CPU time put in. For example, when I do an average task with CUDA it shows mine being done in 2 to 3 minutes, but the actual time taken is 2 to 3 hours. Some are doing those tasks in minutes, but I think they are still quite a way from being able to do them in seconds. |
Betting Slip Send message Joined: 25 Jul 00 Posts: 89 Credit: 716,008 RAC: 0 |
Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA. It was the CEO of IBM, no less. Computers get faster, problems get bigger, what they are asked to do grows, and computers get faster again; and so it goes, on and on. |
Grey Shadow Send message Joined: 26 Nov 08 Posts: 41 Credit: 139,654 RAC: 0 |
Don't worry. Modern scientific projects require enormous computing power, so even a 10x increase in computing speed achieved by CUDA and similar technologies won't feed their hunger. |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
CUDA is one of those 'disruptive' technologies that Andy Grove boringly chants about. If computing capacity increases, then SETI (or any other project) can plan on doing a 'better' job. In effect, the WUs can become larger and/or longer now. SETI has done this at least once, about a year ago, and the AP app is a second example. However, the disruption in available capacity appears gigantic, requiring a thoughtful response from SETI. (Did I just hear someone say 'good luck'???)

The real issue at SETI's doorstep is how to keep people interested. I am not going to run out and buy a CUDA card (or cards) just to stay in the project. If there is a real justification for it, then I will. However, my guess is that the vast majority of users can't justify the upgrade. There is a reason that Intel is the largest video chip supplier (cheap, average video performance, easy motherboard integration). Also, if you are a green fairy, then it would really be profane to buy a hot, erg-hungry monster add-on board for your word-processing and internet-browsing computer, just to look for ET. (That is, CUDA amplifies the violation of the distributed computing thesis that DC makes use of unused cycles that would otherwise go to waste.)

SETI's resolution to this challenge will probably require more server smarts to better differentiate between host computers and thereby optimize the overall process. They need to find ways to keep people "in the game". I am actually skeptical that Berkeley can handle the increasingly inhomogeneous host computer mix very well. We shall see.

Personally speaking, despite my interest in the potential science of SETI, I find myself reluctantly concurring with the initiator of this thread. I am much less interested in the SETI project now that my few computers have been rendered essentially worthless by CUDA (OK, they are probably worthless in a Zen sort of way, with or without CUDA).

But I'm patient and loyal enough to see what Berkeley's response is over the next couple of quarters. |
tullio Send message Joined: 9 Apr 04 Posts: 8797 Credit: 2,930,782 RAC: 1 |
I have a Sun workstation without a graphics card, only a graphics chip and an audio chip. I am satisfied with it, since it has run 24/7 since last January. If I installed a graphics card (I have 3 PCIe slots plus 3 PCI slots), the power consumption and the heat produced would increase sharply. I am running SETI, Einstein, QMC, CPDN, LHC and, since today, AQUA, meeting all deadlines. So I leave it to the speed fans to increase their processing power with CUDA. I like to walk at a steady pace, like a good alpinist (which I was) scaling a mountain. Tullio |
Francis Noel Send message Joined: 30 Aug 05 Posts: 452 Credit: 142,832,523 RAC: 94 |
I do not think this will be *the end* as such. Available computing power can be compared to available financial resources. When I was a young boy I had a limited allowance, so my spending was limited to chewing gum and the odd hockey trading card. At the time I was perfectly content with what I had; it suited my needs. One day I got older and started working part-time. The added income allowed me to acquire moar stuff and do moar things. As I progressed in life my income kept rising, and so did my spending!

SETI Classic did some impressive stuff with the computing power that was available back in '99. Then computers got faster, more resources became available, and out came SETI Enhanced. CUDA is new, unoptimized and a bit raw, but I'm pretty sure it's a raise nonetheless. PhonAcq is right in stating that projects will eventually use these additional computing resources one way or another. WaveMaker does have a point that the installed resources might be too much for SAH. The project may not have sufficient oomph on the backend to feed work to everyone, but then that is the idea behind BOINC: multiple projects.

Another way to see it is that scientific endeavors that are currently unthinkable because of a lack of computing resources, even with the current pool of BOINC crunchers, might become viable in the future and make the CUDA-enabled framework seem feeble. It could happen. What would you do if you won the lotto? Lots of stuff you thought you would never be able to do, maybe? Yeah, me too :) mambo |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
I believe this is the end to Distributed Computing, as we have known it for a decade or so. Back in the days of SETI Classic, we got to a point where computers could outrun the servers -- something BOINC handles, and Classic didn't. The project responded by making the calculation more sensitive (and more difficult, thus taking longer). ... and I suspect that, given "N" as the available amount of computing, for any value of "N" there will always be a need for "N+1." |
Paul D. Buck Send message Joined: 19 Jul 00 Posts: 3898 Credit: 1,158,042 RAC: 0 |
I would also point out that when BOINC "started" there were 5 total projects active consuming our CPU cycles. I have participated in 57 projects with about 50 of them still active (or semi-active). Every so often a project dies or goes inactive due to a graduation or loss of funding. But, new projects come alive and additional work is created. I think the death announcement is a tad premature. |
Richard Walton Send message Joined: 21 May 99 Posts: 87 Credit: 192,735 RAC: 0 |
I believe this is the end to Distributed Computing, as we have known it for a decade or so. You are correct. That also means that little guys like me, on a laptop that cannot do CUDA, will be out of it. If they make the calculations more time-consuming to take advantage of CUDA's speed, I may as well stop, or spend more money I do not have to upgrade yet again. I am not complaining. It is inevitable, and the way computers in general go. You start with computer capacity. The programs expand to take advantage of it. Soon you need more computing power to run basic stuff, etc. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA. Oh, and you can't forget Mr. Bill Gates himself saying that nobody would ever need more than 1 megabyte of RAM. Now we have Vista, which will happily make a full gig disappear. Ironic. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA. Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ...and if you disable SuperFetch and indexing, you can get Vista to operate in as little as 256MB of RAM or less. As for the topic, I'm amazed that people still sound the death knell for things because of a change. One only needs to look at history to see what will really happen. ...and who cares if they expand the workunits to be larger/longer for CUDA? Why should this cause anyone to stop, just because the workunits take longer? Longer is a good thing, and we should get out of the "instant gratification" mentality. I still remember when our computers took an entire week to process a single SETI workunit, and now we're complaining because AP takes a day or more? The human mind defines itself through the misery it experiences. It would seem some will never be happy. |
The Gas Giant Send message Joined: 22 Nov 01 Posts: 1904 Credit: 2,646,654 RAC: 0 |
I believe this is the end to Distributed Computing, as we have known it for a decade or so. I wouldn't be concerned just yet. My old 3.0GHz P4 with H/T only gets a RAC of 300 on Malaria Control, but using an optimised SETI app gets closer to 800 or an optimised MilkyWay app gets 1100 or so. There is still good work out there to be done with old(er) hardware - you just have to want to do it. And the beauty of BOINC is that you can! Live long and BOINC! Paul (S@H1 8888) And proud of it! |
Blurf Send message Joined: 2 Sep 06 Posts: 8962 Credit: 12,678,685 RAC: 0 |
I disagree with the idea that DC will disappear any time soon. Projects will still need work done, so there will always be a need (at least for now) for work to be crunched. |
ML1 Send message Joined: 25 Nov 01 Posts: 20265 Credit: 7,508,002 RAC: 20 |
Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ... So what is the correct quote, or where should the quote come from? The human mind defines itself through the misery it experiences. It would seem some will never be happy. Yikes! That may well be only the second thing ever that we have mutually agreed upon! Our civilisation could well be far better with more forward-looking, "can do" people rather than the morass of embittered, negative, backwards-looking sentimentalists that we seem to suffer. We've never had life so easy... at least until Global Warming (Heating) cooks us further! (And yes, you can still be positive about that!) Like most things, Distributed Computing will continue to evolve to work on whatever Big Problem is to hand at the time. Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
Jord Send message Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3 |
Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ... WikiQuote: Bill Gates. |
RandyC Send message Joined: 20 Oct 99 Posts: 714 Credit: 1,704,345 RAC: 0 |
Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ... See Wikiquote entry. [edit]Ageless beat me by THAT much![/edit] |
Allie in Vancouver Send message Joined: 16 Mar 07 Posts: 3949 Credit: 1,604,668 RAC: 0 |
I think it unlikely that CUDA marks the end of DC. Consider: by itself, a graphics card is just a hunk of metal and plastic. It still needs a mobo, CPU, PSU, etc., etc., all of which cost money. All projects run on limited budgets (else they wouldn't need our help), so it is unlikely that they will suddenly run out and lay in 2,000 or 20,000 or whatever new computers with CUDA in order to run their projects. And remember this: the majority of BOINC users are of the set-and-forget variety. Even if they have CUDA-compatible cards, they aren't likely to go to the bother of downloading new drivers and going through all the techno-crap to get it to work. It is mostly just a handful of us lunatics who get overly involved in it. ;o) Samuel Clemens: "The rumours of my death have been greatly exaggerated." Pure mathematics is, in its way, the poetry of logical ideas. Albert Einstein |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.