Message boards :
Number crunching :
Volta or Tesla?
cliff west Send message Joined: 7 May 01 Posts: 211 Credit: 16,180,728 RAC: 15 |
So I was reading Tom's, and they say this is not for gaming but for deep learning. Okay, so what does that mean? Will this be something that will go in a desktop machine or a server? Will SETI be able to get the most out of a GPU like this? |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
For now, the new Tensor cores do nothing for gaming. Of course that will change later, when developers get a handle on just what the new architecture can do. I would assume that GPGPU development would occur before gaming improvements, since that is the next target audience after AI and autonomous-driving development. It will depend, as always, on our volunteer developers to put the new architecture into play with any new apps that can utilize it. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
scocam Send message Joined: 28 Feb 17 Posts: 27 Credit: 15,120,999 RAC: 0 |
I was reading up on NVIDIA's new Volta GV100 GPU architecture, and I'm not sure where I read it or if I read it correctly... but I believe preliminary estimates are around $18k US. It looks like some brilliant architecture, and I'm sure many of the big users will help bring consumer(!) costs down, but I can't imagine it would be by enough to be within reach for many. The architecture looks amazing, and I'd be interested in reading what some of the brilliant minds around here think about it. scocam |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I'm not sure if it will ever be in reach of consumer, or maybe not even prosumer, costs because of the very large size of the silicon die. Die cost is almost directly proportional to die area in square millimeters. Even with the next generations of feature sizes down to the 5-7 nm level, it is a big piece of silicon. Factor in the probable small production runs, and I doubt the manufacturer's chip cost will ever get much below $2K. Well, that is my $0.02 of crystal-ball viewing .... Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
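The cost argument above can be sketched numerically. This is a rough back-of-the-envelope model, not a real fab quote: the wafer cost, defect density, and the classic dies-per-wafer and Poisson yield approximations are all illustrative assumptions, and the ~815 mm² figure is GV100's published die size. It shows why cost per good die grows faster than linearly with area: a bigger die means both fewer dies per wafer and a lower fraction of defect-free dies.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic approximation: usable wafer area over die area,
    minus a correction for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_mm2=0.001):
    """Simple Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2,
                      defects_per_mm2=0.001):
    good_dies = (dies_per_wafer(wafer_diameter_mm, die_area_mm2)
                 * yield_fraction(die_area_mm2, defects_per_mm2))
    return wafer_cost / good_dies

# Compare a mid-size ~300 mm^2 consumer die to a GV100-class ~815 mm^2 die
# on a 300 mm wafer (the $9,000 wafer cost is an assumed round number).
small = cost_per_good_die(9000, 300, 300)
big = cost_per_good_die(9000, 300, 815)
print(f"~300 mm^2 die: ${small:,.0f} per good die")
print(f"~815 mm^2 die: ${big:,.0f} per good die")
```

With these assumed numbers the big die comes out several times more expensive per good die than the mid-size one, even though its area is less than three times larger.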
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
The likely use of the chip is going to be as the main compute device in autonomous-driving platforms. And that technology is going to scale rapidly over the next 10 years. I think the GV100 is the basis for the replacement of the Nvidia DGX-1 system at $129,000. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
From part of my post at https://setiathome.berkeley.edu/forum_thread.php?id=80636&postid=1867598#1867598 : ... "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
I'm not sure if it will ever be in reach of consumer, or maybe not even prosumer, costs because of the very large size of the silicon die. Die cost is almost directly proportional to die area in square millimeters. Even with the next generations of feature sizes down to the 5-7 nm level, it is a big piece of silicon. Factor in the probable small production runs, and I doubt the manufacturer's chip cost will ever get much below $2K. Well, that is my $0.02 of crystal-ball viewing .... Much of the horsepower required for AI development is in the training itself. Once the model parameters are established/learned, production use requires far fewer resources, since runtime then becomes a function of dataset size and the number of neurons, basically in the form of a number of filters. In that sense nVidia seems to have thought ahead, putting those compute grids on the cloud for development use, with implementation for our purposes relatively straightforward for general-purpose use. Not pretending to know absolute feasibility yet, but we do have a simplifying advantage of not requiring realtime processing (unlike vehicles). "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
In that sense nVidia seems to have thought ahead, putting those compute grids on the cloud for development use, with implementation for our purposes relatively straightforward for general-purpose use. Not pretending to know absolute feasibility yet, but we do have a simplifying advantage of not requiring realtime processing (unlike vehicles). That is a very cogent point about not needing real-time processing for our computations. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13746 Credit: 208,696,464 RAC: 304 |
I had a post with a few pertinent quotes in it that got lost yesterday when the Web site went AWOL. So here's the link without the synopsis: AnandTech's introduction to Volta. Grant Darwin NT |
betreger Send message Joined: 29 Jun 99 Posts: 11361 Credit: 29,581,041 RAC: 66 |
Ain't gonna happen here for a long time. I'm 71; I wonder if I'll live long enough to own one. |
George 254 Send message Joined: 25 Jul 99 Posts: 155 Credit: 16,507,264 RAC: 19 |
Keith, it will be interesting to see what happens in around 10 years' time, when Moore's Law apparently won't work anymore because, as one writer put it, "we run out of atoms"! My tuppence worth. George |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Well, they are already running experimental fabs at 6 nm using EUV lithography. Any smaller than about 4 nm and everything is going to be dominated by quantum effects. You won't be able to build a conventional transistor because of quantum tunneling. They will have to move to optical switches, where they will have better luck controlling the quantum effects. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
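The tunneling concern can be illustrated with the textbook WKB estimate for a rectangular barrier, T ≈ exp(-2κd) with κ = sqrt(2mV)/ħ. The 1 eV barrier height here is an illustrative assumption, not a real gate-oxide figure, but the exponential dependence on barrier width is the point: halving the insulating layer's thickness raises leakage by many orders of magnitude.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt in joules

def tunneling_probability(barrier_ev, width_nm):
    """WKB estimate T ~ exp(-2 * kappa * d) for an electron hitting a
    rectangular potential barrier of the given height and width."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Leakage through an assumed 1 eV barrier as it thins toward atomic scale.
for width in (5.0, 2.0, 1.0):
    print(f"{width} nm barrier: T ~ {tunneling_probability(1.0, width):.2e}")
```

At a few nanometers the probability is negligible; near one nanometer it becomes large enough that a "switched-off" transistor leaks constantly, which is the wall the post is describing.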
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.