Volta or Tesla?

Message boards : Number crunching : Volta or Tesla?

Profile cliff west

Joined: 7 May 01
Posts: 211
Credit: 16,180,728
RAC: 15
United States
Message 1867487 - Posted: 15 May 2017, 18:49:58 UTC

So I was reading Tom's, and they say this is not for gaming but for deep learning. Okay, so what does that mean? Will this be something that goes in a desktop machine or a server? Will SETI be able to get the most out of a GPU like this?
ID: 1867487
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867489 - Posted: 15 May 2017, 18:56:17 UTC - in response to Message 1867487.  

For now, the new Tensor cores do nothing for gaming. Of course, that will change later once developers get a handle on just what the new architecture can do. I would assume GPGPU development will come before gaming improvements, since that is the next target audience after AI and autonomous-driving development. As always, it will depend on our volunteer developers to put the new architecture into play with new apps that can utilize it.
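For reference, what a Tensor core actually does is small-tile, mixed-precision matrix math. Here is a minimal NumPy sketch of the operation (the 4x4 FP16-in/FP32-accumulate shape follows NVIDIA's description of Volta; the code is just an illustration of the arithmetic, not how the hardware is programmed):

```python
import numpy as np

# Sketch of the per-clock Tensor core operation: D = A @ B + C on 4x4
# tiles, with FP16 inputs and FP32 accumulation (mixed precision).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

# Inputs are stored in FP16, but products are accumulated in FP32,
# which is what makes the result usable for training neural networks.
D = A.astype(np.float32) @ B.astype(np.float32) + C

# FP32 result of the 4x4 matrix multiply-accumulate
print(D)
```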
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867489
Profile scocam
Joined: 28 Feb 17
Posts: 27
Credit: 15,120,999
RAC: 0
United States
Message 1867524 - Posted: 15 May 2017, 22:13:51 UTC
Last modified: 15 May 2017, 22:15:14 UTC

I was reading up on NVIDIA's new Volta GV100 GPU architecture, and I'm not sure where I read it or if I read it correctly... but I believe preliminary estimates are around $18k US. It looks like some brilliant architecture, and I'm sure many of the big users will help bring consumer(!) costs down, but I can't imagine it would be by enough to be within reach for many.

The architecture looks amazing and I'd be interested in reading what some of the brilliant minds around here think about it.


scocam
ID: 1867524
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867533 - Posted: 16 May 2017, 0:07:37 UTC - in response to Message 1867524.  

I'm not sure it will ever be in reach of consumer, or maybe even prosumer, prices because of the very large size of the silicon die. Die cost is almost directly proportional to area in square millimeters. Even with the next generations of feature size down to the 5-7 nm level, it is a big piece of silicon. Factor in the probable small production runs and I doubt the manufacturer chip cost will ever get much below $2K. Well, that is my $0.02 of crystal ball viewing ....
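To put rough numbers on the die-size argument, here is a toy cost model. The defect density and wafer cost below are invented round figures, not real fab data; only the roughly 815 mm² GV100 die area comes from NVIDIA's published specs:

```python
import math

# Illustrative sketch of why a big die costs so much: good dies per
# wafer fall with area both geometrically and through yield loss.
def good_dies_per_wafer(die_area_mm2, wafer_diam_mm=300, defects_per_cm2=0.1):
    r = wafer_diam_mm / 2
    # Standard dies-per-wafer approximation, including edge loss.
    gross = (math.pi * r**2 / die_area_mm2
             - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))
    # Poisson yield model: probability a die has zero killer defects.
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return gross * yield_frac

wafer_cost = 10_000  # assumed dollars per processed wafer (made up)
for area in (100, 300, 815):  # 815 mm^2 ~ the reported GV100 die size
    cost = wafer_cost / good_dies_per_wafer(area)
    print(f"{area:4d} mm^2 die -> ~${cost:,.0f} per good die")
```

Under these assumptions an 815 mm² die costs well over an order of magnitude more than a mainstream-sized one, before packaging, HBM2, and small-run overheads are even counted.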
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867533
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867535 - Posted: 16 May 2017, 0:11:11 UTC

The likely use of the chip is going to be as the main compute device in autonomous-driving platforms. And that technology is going to scale rapidly over the next 10 years. I think the GV100 is the replacement for the NVIDIA DGX-1 system at $129,000.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867535
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1867601 - Posted: 16 May 2017, 6:56:36 UTC

From part of my post at https://setiathome.berkeley.edu/forum_thread.php?id=80636&postid=1867598#1867598 :

...
[Side note:]
At the same time, while down with a cold and digging into that pulsefind race, I found material on using Convolutional Neural Networks (CNNs) specifically for RF chirped-signal feature recognition. That's significant because it has the potential to recognise features to a higher certainty than we're used to (~98%) across all chirp rates in the same pass. Even if only used as a prescan to sparsify/target traditional chirp + Fourier analyses, that form of AI is what the current architectures were built to do, and the next generation (Volta) supposedly has 'Tensor processors' in addition to the normal CUDA cores. The rapid development in that direction may be too important to ignore for much longer.
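As a toy illustration of the traditional chirp + Fourier side of this (not the CNN itself), here is a sketch with made-up parameters: de-chirping at the correct trial rate collapses a noisy linear chirp into a single FFT bin, and it is this per-rate search that a CNN prescan could help sparsify:

```python
import numpy as np

# Synthetic linear chirp buried in complex Gaussian noise.
fs, n = 1024, 1024
t = np.arange(n) / fs
f0, rate = 100.0, 50.0            # start frequency (Hz), chirp rate (Hz/s)
signal = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t**2))
rng = np.random.default_rng(0)
noisy = signal + rng.normal(size=n) + 1j * rng.normal(size=n)

def peak_power(x, trial_rate):
    # Multiply by the conjugate chirp, then look for a narrowband tone:
    # at the correct rate all the signal energy lands in one FFT bin.
    dechirped = x * np.exp(-2j * np.pi * 0.5 * trial_rate * t**2)
    return np.abs(np.fft.fft(dechirped)).max()

# The correct chirp rate yields a far stronger peak than a wrong one.
print(peak_power(noisy, 50.0), peak_power(noisy, 0.0))
```

The traditional search repeats this for every candidate chirp rate; a classifier that flags promising rates in one pass is what makes the CNN idea attractive.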

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1867601
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1867603 - Posted: 16 May 2017, 7:09:49 UTC - in response to Message 1867533.  

I'm not sure it will ever be in reach of consumer, or maybe even prosumer, prices because of the very large size of the silicon die. Die cost is almost directly proportional to area in square millimeters. Even with the next generations of feature size down to the 5-7 nm level, it is a big piece of silicon. Factor in the probable small production runs and I doubt the manufacturer chip cost will ever get much below $2K. Well, that is my $0.02 of crystal ball viewing ....



Much of the horsepower required for AI development is in the training itself. Once the model parameters are established/learned, production use requires far fewer resources, since runtime then becomes a function of dataset size and the number of neurons, basically in the form of a number of filters. In that sense NVIDIA seems to have thought ahead by putting those compute grids on the cloud for development use, which should make implementation for our purposes relatively straightforward. Not pretending to know absolute feasibility yet, but we do have the simplifying advantage of not requiring realtime processing (unlike vehicles).
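A back-of-envelope sketch of that scaling, with the layer shapes invented purely for illustration: once trained, inference cost is just arithmetic that grows linearly with input length and filter count.

```python
# Inference cost of a 1-D convolutional layer, independent of how
# expensive training was: ~2 multiply-adds per kernel tap, per output
# sample, per in/out channel pair.
def conv1d_flops(input_len, kernel, in_ch, out_ch):
    return 2 * input_len * kernel * in_ch * out_ch

# Hypothetical two-layer network over a million-sample workunit.
layers = [(1_048_576, 9, 1, 16),   # raw samples -> 16 filters
          (1_048_576, 9, 16, 32)]  # 16 filters  -> 32 filters
total = sum(conv1d_flops(*shape) for shape in layers)
print(f"~{total / 1e9:.1f} GFLOPs per million-sample workunit")
```

Even a figure in the tens of GFLOPs per workunit is seconds of work for a modern GPU, which is the point: with no realtime deadline, trained-model inference is cheap compared to training.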
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1867603
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867607 - Posted: 17 May 2017, 2:04:51 UTC - in response to Message 1867603.  

In that sense NVIDIA seems to have thought ahead by putting those compute grids on the cloud for development use, which should make implementation for our purposes relatively straightforward. Not pretending to know absolute feasibility yet, but we do have the simplifying advantage of not requiring realtime processing (unlike vehicles).

That is a very cogent point about not needing real-time processing for our computations.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867607
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1867642 - Posted: 17 May 2017, 5:09:53 UTC
Last modified: 17 May 2017, 5:10:03 UTC

I had a post with a few pertinent quotes in it that got lost yesterday when the website went AWOL.
So here's the link without the synopsis.
AnandTech's introduction to Volta.
Grant
Darwin NT
ID: 1867642
Profile betreger Project Donor
Joined: 29 Jun 99
Posts: 11361
Credit: 29,581,041
RAC: 66
United States
Message 1867649 - Posted: 17 May 2017, 5:21:00 UTC - in response to Message 1867642.  

Ain't gonna happen here for a long time. I'm 71; I wonder if I'll live long enough to own one.
ID: 1867649
Profile George 254
Volunteer tester

Joined: 25 Jul 99
Posts: 155
Credit: 16,507,264
RAC: 19
United Kingdom
Message 1867658 - Posted: 17 May 2017, 6:06:58 UTC - in response to Message 1867533.  

Keith
It will be interesting to see what happens in around 10 years' time, when Moore's Law apparently won't work anymore because, as one writer put it, "we run out of atoms"!
My tuppence worth
George
ID: 1867658
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1867661 - Posted: 17 May 2017, 6:39:21 UTC - in response to Message 1867658.  

Well, they're already running experimental fabs at 6 nm using EUV lithography. Any smaller than about 4 nm and everything becomes dominated by quantum effects: you won't be able to build a conventional transistor because of quantum tunneling. They will have to move to optical switches, where they will have better luck controlling the quantum effects.
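A rough WKB-style estimate shows why tunneling becomes the limit: transmission through a barrier grows exponentially as the barrier thins. The 1 eV barrier height and the widths below are illustrative numbers only, not the geometry of any real transistor.

```python
import math

# WKB transmission through a rectangular potential barrier:
# T ~ exp(-2 * kappa * width), kappa = sqrt(2 * m * V) / hbar.
HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7e-31    # electron mass, kg
EV = 1.602_176_6e-19     # joules per electron-volt

def transmission(width_nm, barrier_ev=1.0):
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Each nanometre shaved off the barrier multiplies leakage enormously.
for w in (3.0, 2.0, 1.0):
    print(f"{w:.0f} nm barrier -> tunneling probability ~{transmission(w):.1e}")
```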
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1867661



 
©2024 University of California
 
SETI@home and AstroPulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.