The End of Distributive Computing as we know it

Message boards : Number crunching : The End of Distributive Computing as we know it

1 · 2 · Next

Profile Ace Casino
Joined: 5 Feb 03
Posts: 285
Credit: 29,750,804
RAC: 15
United States
Message 843625 - Posted: 22 Dec 2008, 13:44:27 UTC

I believe this is the end of Distributed Computing as we have known it for a decade or so.

If you can complete work in seconds rather than minutes or hours… why keep us around? The WUs I’ve seen have completed in 4 to 7 seconds with CUDA. The wingmen paired with the CUDA hosts have taken anywhere from 1,000+ seconds to 3,000+ seconds.

SETI could take the $500,000 in donations per year, start buying NVIDIA cards, and do it all themselves a lot more easily, without the hassle of outside people.

Other “potential” projects will surely opt for do-it-yourself now, rather than set up a BOINC project. Even if they do decide to use distributed computing, it may be limited to 1,000 or 20,000 users, or whatever that magic number might be. In the future, only projects with the most “massive” amounts of data would use distributed computing, and even those projects may limit the number of people because of data transfer, data creation, or other limitations they may have.

The end of Distributed Computing as we know it may not happen overnight… but it will likely happen in the next few years if NVIDIA’s CUDA or other drivers like it prove as successful, or more so, at crunching data. Computers, from home machines to supercomputers, are getting faster and less expensive and are becoming easily affordable for almost anyone, especially a university or research group. There may always be a small niche for Distributed Computing in the future, but not on the scale it is now.

Think of yourself as a Columbus or a Magellan: you helped pave the way… but even their voyages came to an end.
ID: 843625
Sirius B Project Donor
Volunteer tester
Joined: 26 Dec 00
Posts: 24879
Credit: 3,081,182
RAC: 7
Ireland
Message 843628 - Posted: 22 Dec 2008, 13:49:51 UTC - in response to Message 843625.  

I believe this is the end of Distributed Computing as we have known it for a decade or so. […] Think of yourself as a Columbus or a Magellan: you helped pave the way… but even their voyages came to an end.


Yep, that old quote comes to mind - "All good things come to an end". However, I don't think that many universities/companies/researchers will waste their budget on machines to crunch their data when they can have a massive volunteer network doing the crunching free of charge. If they did it in house, think about their bills; with DC, they won't have that unnecessary outlay.

ID: 843628
Profile tullio
Volunteer tester

Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 843635 - Posted: 22 Dec 2008, 14:05:39 UTC

Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA.
Tullio
ID: 843635
Profile Byron S Goodgame
Volunteer tester
Joined: 16 Jan 06
Posts: 1145
Credit: 3,936,993
RAC: 0
United States
Message 843639 - Posted: 22 Dec 2008, 14:14:04 UTC - in response to Message 843625.  
Last modified: 22 Dec 2008, 14:38:41 UTC

If you can complete work in seconds rather than minutes or hours… why keep us around? The WUs I’ve seen have completed in 4 to 7 seconds with CUDA. The wingmen paired with the CUDA hosts have taken anywhere from 1,000+ seconds to 3,000+ seconds.


The tasks aren't being done in seconds by people using CUDA (edit: well, maybe some of the -9 tasks). That figure just shows the CPU time put in. For example, when I do an average task with CUDA it shows mine being done in 2 to 3 minutes, but the actual elapsed time is 2 to 3 hours.

Some are doing those tasks in minutes, but they are still quite a way from being able to do them in seconds, I think.
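To illustrate the distinction between CPU time and elapsed (wall-clock) time, here is a minimal, self-contained Python sketch; the sleep-based stand-in for a GPU kernel and all of the numbers are invented for illustration, and this is not the SETI@home application:

```python
import time

def crunch_on_gpu_stub(chunks: int) -> None:
    """Stand-in for work offloaded to a GPU: the host CPU mostly just waits."""
    for _ in range(chunks):
        time.sleep(0.05)                        # CPU idle while the "GPU" works; wall time still accrues
        _ = sum(i * i for i in range(10_000))   # a brief slice of real CPU work per chunk

wall_start = time.perf_counter()    # wall-clock (elapsed) time
cpu_start = time.process_time()     # CPU time actually consumed by this process

crunch_on_gpu_stub(chunks=100)

print(f"elapsed time: {time.perf_counter() - wall_start:5.1f} s")   # several seconds here
print(f"CPU time:     {time.process_time() - cpu_start:5.1f} s")    # a small fraction of that
```

A GPU task that runs for hours of wall-clock time can therefore report only minutes (or seconds) of CPU time, and the CPU time is what the result page shows.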
ID: 843639
Betting Slip

Joined: 25 Jul 00
Posts: 89
Credit: 716,008
RAC: 0
United Kingdom
Message 843644 - Posted: 22 Dec 2008, 14:33:49 UTC - in response to Message 843635.  

Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA.
Tullio


It was the CEO of IBM no less.

Computers get faster, problems get bigger, what they are asked to do becomes more and computers get faster and so it goes on and on.
ID: 843644
Grey Shadow
Volunteer tester
Joined: 26 Nov 08
Posts: 41
Credit: 139,654
RAC: 0
Russia
Message 843658 - Posted: 22 Dec 2008, 15:06:23 UTC

Don't worry.
Modern scientific projects require enormous computing power, so even a 10x increase in computing speed from CUDA and similar technologies won't feed their hunger.
ID: 843658
PhonAcq

Joined: 14 Apr 01
Posts: 1656
Credit: 30,658,217
RAC: 1
United States
Message 843686 - Posted: 22 Dec 2008, 16:01:51 UTC

CUDA is one of those 'disruptive' technologies that Andy Grove boringly chants about. If the computing capacity increases, then SETI (or any other project) can plan on doing a 'better' job. In effect, the WUs can become larger and/or longer now. SETI has done this at least once, about a year ago, and the AP app is a second example. However, the disruption in available capacity appears gigantic, requiring a thoughtful response by SETI. (Did I just hear someone say 'good luck'???)

The real issue at SETI's doorstep is how to keep people interested. I am not going to run out and buy a CUDA card (or several) just to stay in the project. If there is a real justification for it, then I will. However, my guess is that the vast majority of users can't justify the upgrade. There is a reason that Intel is the largest video chip supplier (cheap, average video performance, easy motherboard integration). Also, if you are a green fairy, then it would really be profane to buy a hot, erg-hungry monster add-on board for your word-processor/internet-browser computer, just to look for ET. (That is, CUDA amplifies the violation of the distributed computing thesis that DC makes use of unused cycles that otherwise go to waste.)

SETI's resolution to this challenge will probably require more server smarts to better differentiate between host computers and thereby optimize the overall process. They need to find ways to keep people "in the game". I am actually skeptical that Berkeley can handle the increasingly inhomogeneous host computer mix very well. We shall see.
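As a toy sketch of what that kind of host-aware scheduling could look like, the snippet below hands each host the largest workunit it can plausibly return on time; the host speeds, workunit sizes, and one-hour budget are invented for illustration, and this is not the actual BOINC scheduler:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    gflops: float          # benchmarked throughput of the host

@dataclass
class WorkUnit:
    wu_id: int
    size_gflop: float      # estimated amount of computation in the task

def pick_workunit(host: Host, queue: list[WorkUnit]) -> WorkUnit:
    """Give each host the largest queued workunit it can finish within ~1 hour."""
    budget = host.gflops * 3600                        # GFLOP the host can do in an hour
    feasible = [wu for wu in queue if wu.size_gflop <= budget]
    chosen = max(feasible or queue[:1], key=lambda wu: wu.size_gflop)
    queue.remove(chosen)
    return chosen

queue = [WorkUnit(1, 900.0), WorkUnit(2, 40_000.0), WorkUnit(3, 1_200.0)]
for host in (Host("old P4", 3.0), Host("CUDA rig", 60.0)):
    wu = pick_workunit(host, queue)
    print(f"{host.name:8s} -> WU {wu.wu_id} ({wu.size_gflop:,.0f} GFLOP)")
```

The slow CPU host still gets work it can return on time, while the CUDA host soaks up the big workunits, which is one way to keep both kinds of cruncher "in the game".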

Personally speaking, despite my interest in the potential science of SETI, I find myself reluctantly concurring with the initiator of this thread. I am much less interested in the SETI project now that my few computers have been rendered essentially worthless by CUDA (OK, they are probably worthless in a Zen sort of way with or without CUDA). But I'm patient and loyal enough to see what Berkeley's response is over the next couple of quarters.
ID: 843686
Profile tullio
Volunteer tester

Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 843689 - Posted: 22 Dec 2008, 16:10:08 UTC - in response to Message 843686.  

I have a Sun workstation without a graphics card, only an onboard graphics chip and an audio chip. I am satisfied with it; it has been running 24/7 since last January. If I installed a graphics card (I have 3 PCIe slots plus 3 PCI slots), the power consumption and the heat produced would increase sharply. I am running SETI, Einstein, QMC, CPDN, LHC and, since today, AQUA, meeting all deadlines. So I leave it to the speed fans to increase their processing power with CUDA. I like to walk at a steady pace, like a good alpinist (which I was) scaling a mountain.
Tullio
ID: 843689
Profile Francis Noel
Joined: 30 Aug 05
Posts: 452
Credit: 142,832,523
RAC: 94
Canada
Message 843705 - Posted: 22 Dec 2008, 17:17:08 UTC

I do not think this will be *the end* as such. Available computing power can be compared to available financial resources. When I was a young boy I had a limited allowance, so my spending was limited to chewing gum and the odd hockey trading card. At the time I was perfectly content with what I had; it suited my needs. One day I got older and started working part-time. The added income allowed me to acquire moar stuff and do moar things. As I progressed in life my income kept rising, and so did my spending! SETI Classic did some impressive stuff with the computing power that was available back in '99. Then computers got faster, more resources were available, and out came SETI Enhanced. CUDA is new, unoptimized and a bit raw, but I'm pretty sure it's a raise nonetheless. PhonAcq is right in stating that projects will eventually use these additional computing resources one way or another.

WaveMaker does have a point that the installed resources might be too much for SAH. The project may not have sufficient oomph on the backend to feed work to everyone, but then that is the idea behind BOINC: multiple projects.

One other way to see it is that scientific endeavors that are currently unthinkable because of a lack of computing resources, even with the current pool of BOINC crunchers, might become viable in the future and make the CUDA-enabled framework seem feeble. It could happen.

What would you do if you won the lotto? Lots of stuff you thought you would never be able to do, maybe? Yeah, me too :)
mambo
ID: 843705
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 843711 - Posted: 22 Dec 2008, 17:33:17 UTC - in response to Message 843625.  

I believe this is the end of Distributed Computing as we have known it for a decade or so. […]

Back in the days of SETI Classic, we got to a point where computers could outrun the servers -- something BOINC handles, and Classic didn't.

The project responded by making the calculation more sensitive (and more difficult, thus taking longer).

... and I suspect that, given "N" as the available amount of computing, for any value of "N" there will always be a need for "N+1."
ID: 843711
Profile Paul D. Buck
Volunteer tester

Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 843778 - Posted: 22 Dec 2008, 19:46:21 UTC

I would also point out that when BOINC "started" there were just 5 active projects in total consuming our CPU cycles. I have participated in 57 projects, with about 50 of them still active (or semi-active).

Every so often a project dies or goes inactive because of a graduation or a loss of funding. But new projects come alive and additional work is created.

I think the death announcement is a tad premature.
ID: 843778
Profile Richard Walton

Joined: 21 May 99
Posts: 87
Credit: 192,735
RAC: 0
United States
Message 843812 - Posted: 22 Dec 2008, 21:09:10 UTC - in response to Message 843711.  

Back in the days of SETI Classic, we got to a point where computers could outrun the servers -- something BOINC handles, and Classic didn't.

The project responded by making the calculation more sensitive (and more difficult, thus taking longer).

... and I suspect that, given "N" as the available amount of computing, for any value of "N" there will always be a need for "N+1."


You are correct. That also means that little guys like me, on a laptop that cannot do CUDA, will be out of it. If they make the calculations more time-consuming to take advantage of CUDA speed, I may as well stop, or spend more money I do not have to upgrade yet again. I am not complaining. It is inevitable, and it is the way computers in general go. You start with some computer capacity, the programs expand to take advantage of it, and soon you need more computing power to run basic stuff, and so on.

ID: 843812
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 843828 - Posted: 22 Dec 2008, 22:00:12 UTC - in response to Message 843644.  

Somebody once said the world will need only 5 computers. Let's not repeat that mistake again. I just enrolled in AQUA, which is simulating quantum computers. That is the next evolution in computing power, not CUDA.
Tullio


It was the CEO of IBM no less.

Computers get faster, problems get bigger, what they are asked to do becomes more and computers get faster and so it goes on and on.

Oh, and you can't forget Mr. Bill Gates himself saying that nobody would ever need more than 1 megabyte of RAM..

Now we have Vista that will happily make a full gig disappear. Ironic.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 843828
OzzFan (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 843839 - Posted: 22 Dec 2008, 22:37:42 UTC - in response to Message 843828.  

Oh, and you can't forget Mr. Bill Gates himself saying that nobody would ever need more than 1 megabyte of RAM..

Now we have Vista that will happily make a full gig disappear. Ironic.


Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ...and if you disable SuperFetch and indexing, you can get Vista to operate in as little as 256 MB of RAM.

As for the topic, I'm amazed that people still sound the death knell for many things because of a change. One only needs to look at history to see what will really happen.

... and who cares if they expand the workunits to be larger/longer for CUDA? Why should this cause anyone to stop, just because the workunits take longer? Longer is a good thing, and we should get out of the "instant gratification" mentality. I still remember when our computers took an entire week to process a single SETI workunit, and now we're complaining because AP takes a day or more?

The human mind defines itself through the misery it experiences. It would seem some will never be happy.
ID: 843839
Profile The Gas Giant
Volunteer tester
Joined: 22 Nov 01
Posts: 1904
Credit: 2,646,654
RAC: 0
Australia
Message 843841 - Posted: 22 Dec 2008, 22:41:28 UTC - in response to Message 843812.  

You are correct. That also means that little guys like me, on a laptop that cannot do CUDA, will be out of it. If they make the calculations more time-consuming to take advantage of CUDA speed, I may as well stop, or spend more money I do not have to upgrade yet again. I am not complaining. It is inevitable, and it is the way computers in general go. You start with some computer capacity, the programs expand to take advantage of it, and soon you need more computing power to run basic stuff, and so on.

I wouldn't be concerned just yet. My old 3.0 GHz P4 with H/T only gets a RAC of 300 on Malaria Control, but with an optimised SETI app it gets closer to 800, and with an optimised MilkyWay app around 1,100 or so. There is still good work out there to be done with old(er) hardware - you just have to want to do it. And the beauty of BOINC is that you can!

Live long and BOINC!

Paul
(S@H1 8888)
And proud of it!
ID: 843841
Profile Blurf
Volunteer tester

Joined: 2 Sep 06
Posts: 8962
Credit: 12,678,685
RAC: 0
United States
Message 843842 - Posted: 22 Dec 2008, 22:41:37 UTC

I disagree with the idea that DC will disappear any time soon. Projects will still need work done, so there will always be a need (at least for now) for work to be crunched.


ID: 843842
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20265
Credit: 7,508,002
RAC: 20
United Kingdom
Message 843894 - Posted: 23 Dec 2008, 0:42:17 UTC - in response to Message 843839.  

Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ...

So what is the correct quote or where should the quote come from?

The human mind defines itself through the misery it experiences. It would seem some will never be happy.

Yikes! That may well be only the second thing ever that we have mutually agreed upon!

Our civilisation could well be far better off with more forward-looking and "can do" people rather than the morass of embittered, negative, backward-looking sentimentalists that we seem to suffer. We've never had life so easy...

At least until Global (Heating) Warming cooks us further! (And yes, you can still be positive about that!)

Like most things, Distributed Computing will continue to evolve to work on whatever Big Problem is next to hand at the time.

Happy fast crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 843894
Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 843897 - Posted: 23 Dec 2008, 0:50:07 UTC - in response to Message 843894.  

Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ...

So what is the correct quote or where should the quote come from?

WikiQuote: Bill Gates.
ID: 843897
Profile RandyC
Joined: 20 Oct 99
Posts: 714
Credit: 1,704,345
RAC: 0
United States
Message 843903 - Posted: 23 Dec 2008, 0:59:55 UTC - in response to Message 843894.  
Last modified: 23 Dec 2008, 1:01:17 UTC

Actually, Bill Gates never said that. It is a mis-quote that is often attributed to him for the past 20+ years. ...

So what is the correct quote or where should the quote come from?


See Wikiquote entry.

[edit]Ageless beat me by THAT much![/edit]
ID: 843903
Profile Allie in Vancouver
Volunteer tester
Joined: 16 Mar 07
Posts: 3949
Credit: 1,604,668
RAC: 0
Canada
Message 843919 - Posted: 23 Dec 2008, 1:17:53 UTC

I think it unlikely that CUDA marks the end of DC. Consider: by itself, a graphics card is just a hunk of metal and plastic. It still needs a mobo, CPU, PSU, etc., all of which cost money.

All projects run on limited budgets (else they wouldn’t need our help), so it is unlikely that they will suddenly run out and lay in 2,000 or 20,000 or however many new computers with CUDA in order to run their projects.

And remember this: the majority of BOINC users are of the set-and-forget variety. Even if they have CUDA-compatible cards, they aren’t likely to go to the bother of downloading new drivers and going through all the techno-crap to get it to work. It is mostly just a handful of us lunatics who get overly involved in it. ;o)

Samuel Clemens: “The rumours of my death have been greatly exaggerated.”
Pure mathematics is, in its way, the poetry of logical ideas.

Albert Einstein
ID: 843919