My God, why hasn't nVidia noticed?

Profile RottenMutt
Joined: 15 Mar 01
Posts: 1011
Credit: 230,314,058
RAC: 0
United States
Message 1082845 - Posted: 2 Mar 2011, 4:24:27 UTC

The top one thousand computers are running nVidia! The top three hundred are SLI, many tri-SLI and greater.

You would think they would help with app development. EVGA has a big Folding team; these projects are definitely helping them sell GPUs!!!
ID: 1082845
Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1082847 - Posted: 2 Mar 2011, 4:34:24 UTC - in response to Message 1082845.  

I have been a fan since my Triton card.

Janice
ID: 1082847
Profile -= Vyper =-
Volunteer tester
Joined: 5 Sep 99
Posts: 1652
Credit: 1,065,191,981
RAC: 2,537
Sweden
Message 1082876 - Posted: 2 Mar 2011, 7:54:41 UTC - in response to Message 1082845.  
Last modified: 2 Mar 2011, 8:01:39 UTC

nVidia HAS noticed.

If I don't recall wrong, nVidia helped Berkeley incorporate the first S@H CUDA app, and they were quite fast with that *hrm*
Only about two years after the first CUDA-capable 8800 GT* generation arrived in late 2006 :)

P.S. With that being said, we need some sponsors to show the real deal of GPUs; the electricity for running them isn't cheap at all in my country. Just keeping my top rig powered on costs nearly the price of 2.5 GTX 580s per year. D.S.

//Vyper

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
ID: 1082876
Profile Todd Hebert
Volunteer tester
Joined: 16 Jun 00
Posts: 648
Credit: 228,292,957
RAC: 0
United States
Message 1082882 - Posted: 2 Mar 2011, 8:12:04 UTC

Agreed - nVidia has somewhat lost sight of assisting with development for this project and is focusing its attention on higher margins.

In the world of CPU or GPU design there are always going to be similar design approaches and shared silicon across a product line. There is little difference between an Intel 980X and a Xeon X5680 - they are the same chip with minor functions turned on or off. The same holds true for the GeForce/Quadro/Tesla cards.

In the past you could make a soft-Quadro out of a GeForce card by just manipulating the BIOS to report a different GPU and installing the Quadro drivers - I have a card sitting here where I did just that so AutoCAD would run better - the extra functionality was already there and the driver took advantage of it.

nVidia is promoting the Tesla devices much more because of the higher margins, but overlooks the crunchers of the world even though we buy a ton of their cards. Our cards just happen to be consumer cards, and not where the big money is.

Todd
ID: 1082882
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1082886 - Posted: 2 Mar 2011, 8:39:17 UTC
Last modified: 2 Mar 2011, 9:10:34 UTC

For my 2 cents, I feel that nVidia's support infrastructure for all areas of development is nearly unequalled, perhaps understandably exceeded only by Microsoft's developer network, though nVidia's doesn't exactly require anywhere near that kind of scope.

As far as initial development, correction, and extension of the basic Fermi app go, the goals/mandate, at my guess, would have been to get a reasonably stable & flexible application running that demonstrated the computational abilities & placed them at the head of the pack. It's achieved that, and further direct input would only really be 'required' again with another new architecture release.

Where things are headed with optimisation & 'polishing' as a 3rd-party effort really involves techniques & practices that don't exist yet, and so goes beyond either 'standard' coding or the published recommendations in their Best Practices Guide (optimisation manual) formulated by their engineers. That's science, IMO, rather than engineering. That's mostly a function of GPGPU computing being a relatively new field, though one drawing on decades of supercomputing computer science, and of the incredible pace of advancement.

One good example is the seemingly obvious step of scaling CUDA kernels to operate most efficiently across the wide range of GPUs supported. Techniques are hinted at in the guides & samples, but only experimentation, through poking & prodding at different approaches, really establishes the best outcome for a specific algorithm - something that's really outside nVidia's original (supposed) goals in aiding the app development in the first place. (And indeed, the multibeam CUDA code thus far does no such thing.)
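To make that concrete, here is a minimal sketch of the kind of device-dependent scaling being described. It is not the actual multibeam code - the processSamples kernel and the blocksPerSM figure are hypothetical stand-ins - just an illustration of sizing a launch to whatever GPU is found at runtime, rather than hard-coding one architecture's sweet spot:

    #include <cuda_runtime.h>

    // Hypothetical kernel using a grid-stride loop, so correctness does
    // not depend on the grid providing one thread per element.
    __global__ void processSamples(const float *in, float *out, int n)
    {
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += blockDim.x * gridDim.x)
            out[i] = in[i] * in[i];   // placeholder workload
    }

    // Size the grid to the GPU actually present instead of a fixed count.
    void launchScaled(const float *d_in, float *d_out, int n)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        const int block = 256;       // a common starting point, tuned per algorithm
        const int blocksPerSM = 8;   // hypothetical figure, found by experiment
        int grid = prop.multiProcessorCount * blocksPerSM;

        processSamples<<<grid, block>>>(d_in, d_out, n);
    }

Even then, the 'right' block size and blocks-per-multiprocessor count vary per algorithm and per GPU generation, which is exactly the poking & prodding referred to above.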

I'm sure both Raistmer and myself, as we get more and more proficient with the hardware and tools involved, could even consider writing books on all the stuff that is 'missing'. Whether or not those efforts, & others, turn out to be worthy of special assistance, attention or recognition from the manufacturers in question would probably be dictated by market demand more than anything.

It's easy to forget that as recently as 2 years ago it was possible to question whether GPU supercomputing could gain a foothold & become even a practical approach in a project like this one. I think that initial hurdle has been cleared and we're onto the next phase.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1082886
Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1082922 - Posted: 2 Mar 2011, 13:10:24 UTC - in response to Message 1082886.  
Last modified: 2 Mar 2011, 13:24:28 UTC

Looking at your hosts, you're using an NVIDIA GeForce GTX 480
(1503MB), driver: 26658.

I'm using this driver on my XP64 box and I am wondering if there is
some change in computing times?
(NVIDIA GeForce GTX 480 (1535MB), driver: 26658.) (Is the difference in memory amount
caused by Win7/64-bit?)

On one of my 2 Q6600s, I still run an NVIDIA GeForce GTX 470 (1279MB), driver: 19703.
And on my Vista 32-bit Home version, I use an NVIDIA GeForce GTS 250 (1009MB), driver: 19745.
Updating the driver in my XP 32-bit Pro box, with a 470, seems logical.

(Hope someone from ATI reads this :)

(And I'm tempted to try a 470 and an ATI 5870 in the same box, which also
means a more powerful PSU; a 650-watt type is a bit on the edge.
This box had a 4850 & 5870 ATI GPU and used ~550 watts running MW and
Collatz C., and I also had to use a cc_config.xml file and extend the desktop with a dummy load or monitor.)
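For anyone wanting to repeat that mixed-vendor setup: by default the BOINC client only crunches on the most capable GPU it detects, so a cc_config.xml along these lines is what tells it to use all of them (use_all_gpus is the relevant option; the rest is just the minimal wrapper):

    <cc_config>
      <options>
        <!-- use every detected GPU, not just the most capable one -->
        <use_all_gpus>1</use_all_gpus>
      </options>
    </cc_config>

The file goes in the BOINC data directory and is read when the client starts.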

I remember that someone tried this, but I don't know on what mobo and O.S.
ID: 1082922
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1082926 - Posted: 2 Mar 2011, 13:25:38 UTC - in response to Message 1082922.  
Last modified: 2 Mar 2011, 13:26:32 UTC

I'm using this driver on my XP64 box and I am wondering if there is
some change in computing times?


In my case, not a lot, but the drivers have appeared to get progressively more reliable for my 480... for me, uptime & reliability trump raw throughput with lots of errors.

Very much on topic for the thread: pre-WDDM drivers (i.e. XPDM, running on Windows XP) are faster for *existing* applications (where they run the apps, that is).

..But.. (and it's a big one :) )... Win7/WDDM performance in some cases overtakes XP/XPDM throughput if the code is changed to use newer coding techniques. That means you will likely see a lot of long-held preconceptions jostled around as the Fermi situation stabilises & the next architecture appears on the horizon.
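As one plausible example of such 'newer coding techniques' (my reading; the post doesn't name them): WDDM adds per-submission driver overhead, so code that overlaps transfers and kernels using streams and pinned host memory tends to fare far better under Win7 than code issuing one synchronous copy and launch at a time. A rough sketch, with a placeholder work kernel:

    #include <cuda_runtime.h>

    __global__ void work(float *buf, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            buf[i] *= 0.5f;   // placeholder workload
    }

    // Process n floats in chunks, alternating two streams so the copies
    // of one chunk can overlap the kernel of the other.
    void pipeline(int n, int chunk)
    {
        float *h_buf;
        cudaMallocHost(&h_buf, n * sizeof(float));   // pinned: allows async copies
        // ... fill h_buf with input ...

        cudaStream_t s[2];
        float *d_buf[2];
        for (int c = 0; c < 2; ++c) {
            cudaStreamCreate(&s[c]);
            cudaMalloc(&d_buf[c], chunk * sizeof(float));
        }

        for (int off = 0, c = 0; off < n; off += chunk, c ^= 1) {
            int len = (n - off < chunk) ? (n - off) : chunk;
            cudaMemcpyAsync(d_buf[c], h_buf + off, len * sizeof(float),
                            cudaMemcpyHostToDevice, s[c]);
            work<<<(len + 255) / 256, 256, 0, s[c]>>>(d_buf[c], len);
            cudaMemcpyAsync(h_buf + off, d_buf[c], len * sizeof(float),
                            cudaMemcpyDeviceToHost, s[c]);
        }
        cudaDeviceSynchronize();

        for (int c = 0; c < 2; ++c) {
            cudaStreamDestroy(s[c]);
            cudaFree(d_buf[c]);
        }
        cudaFreeHost(h_buf);
    }

Nothing in the code is WDDM-specific; the point is that this asynchronous, batching-friendly structure hides the driver overhead WDDM introduces, where the old one-synchronous-call-at-a-time style pays it on every operation.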

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1082926
Profile skildude
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1082931 - Posted: 2 Mar 2011, 13:43:59 UTC

Heh, I just lifted my little finger. I've now done more than ATI has in creating apps for SETI.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 1082931
Profile RottenMutt
Joined: 15 Mar 01
Posts: 1011
Credit: 230,314,058
RAC: 0
United States
Message 1082934 - Posted: 2 Mar 2011, 14:04:46 UTC

You mentioned ATI - there is one ATI rig in the top 300!!!
ID: 1082934
-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1082959 - Posted: 2 Mar 2011, 16:13:13 UTC

GPU manufacturers' priorities are not Seti@Home.

ATi OpenCL:
AMD recently released the ATI Stream v2.0 SDK, which includes one of the first complete OpenCL runtimes, allowing software developers to implement data- and task-parallel computations using a single, unified programming model.


Nvidia Cuda:
Computing is evolving from "central processing" on the CPU to "co-processing" on the CPU and GPU. To enable this new computing paradigm, NVIDIA invented the CUDA parallel computing architecture that is now shipping in GeForce, ION, Quadro, and Tesla GPUs, representing a significant installed base for application developers.


Nowhere does it say ATi or Nvidia will be working on projects for people. Hence the reason we have people from the community building the apps. I figure if you threw some cash at either company they would be more than willing to work with any project to get a finely tuned app for the various cards. However, they give you the building blocks to do what you need to get it working; they don't say they will get it working for you.

The main reason Nvidia jumped on board in late 2008 and helped Berkeley develop the CUDA app was that ATi didn't have anything like it available, and it was one more selling point for their products. And it worked. Not to say Nvidia and ATi don't have an interest in developing better apps for their cards to run our projects; however, it doesn't pay well, or at all for that matter. They know you will still be buying their products regardless.
Traveling through space at ~67,000mph!
ID: 1082959
Profile skildude
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1083019 - Posted: 2 Mar 2011, 20:27:11 UTC - in response to Message 1082959.  

Umm, you forget that Nvidia went out of their way to create the CUDA app just for seti@home. ATI has done nothing. So in the bigger scheme of things, Nvidia has made a personal effort to help sell more GPUs by making their app. ATI has again done nothing towards that.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 1083019
Profile Joel
Joined: 31 Oct 08
Posts: 104
Credit: 4,838,348
RAC: 13
United States
Message 1083055 - Posted: 2 Mar 2011, 21:37:33 UTC - in response to Message 1082959.  


Nowhere does it say ATi or Nvidia will be working on projects for people. Hence the reason we have people from the community building the apps. I figure if you threw some cash at either company they would be more than willing to work with any project to get a finely tuned app for the various cards. However, they give you the building blocks to do what you need to get it working; they don't say they will get it working for you.


Sure, of course you wouldn't expect a hardware company to come along and write software for every project or company that wants help. But Nvidia's help to SETI@home a few years ago was an excellent demonstration of how powerful their technology is. We've all bought a bunch of fancy GPUs as a result. AMD/ATI have lagged behind a bit in the GPGPU realm, and now that they are catching up in making it possible with OpenCL, it might behoove them to do as Nvidia did and generate some buzz and demonstrate that their technology is just as good or better. It's not like they're evil for not doing so, or anything, but I do think it would help them raise their profile by showing how good their GPUs can be for this project. And they probably would sell some more cards, too. We've got Raistmer et al. working hard on making ATI GPUs work, but that's still a bit of a hassle, so for now most of us who don't also play games are sticking with Nvidia.
ID: 1083055
-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1083174 - Posted: 3 Mar 2011, 10:21:12 UTC - in response to Message 1083019.  
Last modified: 3 Mar 2011, 10:22:33 UTC

Umm, you forget that Nvidia went out of their way to create the CUDA app just for seti@home. ATI has done nothing. So in the bigger scheme of things, Nvidia has made a personal effort to help sell more GPUs by making their app. ATI has again done nothing towards that.


They aided in the development of the app for Seti@Home, yes - the app. CUDA itself was not specifically designed for Seti@Home. Read much?

The main reason Nvidia jumped on board in late 2008 and helped Berkeley develop the CUDA app was that ATi didn't have anything like it available, and it was one more selling point for their products.



Sure, of course you wouldn't expect a hardware company to come along and write software for every project or company that wants help. But Nvidia's help to SETI@home a few years ago was an excellent demonstration of how powerful their technology is. We've all bought a bunch of fancy GPUs as a result.


Except for the minority at the upper end who have bought numerous cards, I would dare say most of the cards, like mine, are purchased for gaming and only crunch Seti@Home because they can. This can digress all the way back into the whole 'majority crunches more than the vocal minority at the top' argument, so to speak, and many of those machines don't even have GPUs, much less high-end ones.

AMD/ATI have lagged behind a bit in the GPGPU realm, and now that they are catching up in making it possible with OpenCL, it might behoove them to do as Nvidia did and generate some buzz and demonstrate that their technology is just as good or better.


I agree it would be good for them to show the power of their cards, but realistically they get more word out to the people who buy these cards through benchmark sites. If a single site gets more than 150,000 visits to its review of a card, that alone matches our entire user count. Care to guess how many review sites are out there, and how many hits that is combined? All the while it's only about 5-10 cards being passed around. The return on sending those cards out is far better than on hiring a programmer to work extensively on a single project.

As of right now there are more than ten times as many gamers on Steam as there are users in the entire Seti@Home project (1.6 million compared to 150 thousand). Like I said in my last post, I know it would be awesome, but in the real world Seti@Home, and distributed crunching as a whole, just isn't a mass-market target for either of them.

Now, if you want to spend a couple of billion and build a supercomputer with Teslas or ATis and do some kind of research that you make money on, I'm sure they will jump on board and give expertise and time.

It's not like they're evil for not doing so, or anything, but I do think it would help them raise their profile by showing how good their GPUs can be for this project. And they probably would sell some more cards, too. We've got Raistmer et al. working hard on making ATI GPUs work, but that's still a bit of a hassle, so for now most of us who don't also play games are sticking with Nvidia.


I agree completely that there is a business reason they aren't... or better yet, has anyone really asked them to help? Going back to my other point, they don't care how good they look inside Seti@Home; out of their whole market, the ATi users here probably make up well under 0.5%. Raistmer is doing phenomenal work, and he is doing exactly what OpenCL and CUDA were set up for: an independent developer making the card perform parallel tasks using a common, open platform for compatibility. We get to play on the same grounds as people spending billions on supercomputers; the least they can ask is that we do our own work.
Traveling through space at ~67,000mph!
ID: 1083174
