AMD vs nVidia for Seti




Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1208421 - Posted: 21 Mar 2012, 11:42:20 UTC

When I started the "GPU wars" thread, it was with this quote:

"GCN would be AMD’s Fermi moment, where AMD got serious about GPU computing..."

I jumped to the conclusion that these new (Graphic Core Next) cards would be far more "Seti-friendly" and that in a generation or two, Seti's Top Computers might be a mix of both GPUs instead of an nVidia monopoly. I thought new apps would have to be written for GCN and Kepler too. But now I'm not so sure...

What's the dealio with these two?

Richard Haselgrove (Project donor)
Volunteer tester
Joined: 4 Jul 99
Posts: 8465
Credit: 48,951,506
RAC: 75,693
United Kingdom
Message 1208429 - Posted: 21 Mar 2012, 12:16:51 UTC - in response to Message 1208421.

Although GCN is already in the shops, Kepler isn't yet.

I don't think either of the developers has a development sample from their preferred range yet, and I doubt that either of them can afford to buy one, either. Let alone the testers.....

Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1208445 - Posted: 21 Mar 2012, 14:08:12 UTC - in response to Message 1208429.
Last modified: 21 Mar 2012, 14:39:22 UTC

Although GCN is already in the shops, Kepler isn't yet.

I don't think either of the developers has a development sample from their preferred range yet, and I doubt that either of them can afford to buy one, either. Let alone the testers.....


Always appreciate your answers, Richard, and to clarify: I wasn't asking "Why aren't GCN/Kepler apps out yet!?" :)

Here are my questions:

Do these new cards (from both) need new apps?
Is GCN completely different at GPU computing, or just a fancy new name?
Will GCN be substantially* better (in theory) than previous AMD cards at Seti?
Does someone from nVidia have to come in (again) and rework the apps for Kepler?
Why is nVidia so much better at Seti? (Is it Cuda? The architecture? Both?)

I realize (most of) these questions can be answered in theory only (and that's what I'm looking for). I'm not looking for cold hard stats between GCN/Kepler. Just a better understanding of the two, always in relation to Seti.

*And when I say substantially, I mean performance gains other than those gained by a "simple" die-shrink.

Edit: Also who are the "developers" and who are the "testers"? Seti crew & Lunatics crew? I have next to no idea of who does what for who and how, around here. Not a gripe of course, just stating a fact.

Mike (Project donor)
Volunteer tester
Joined: 17 Feb 01
Posts: 23803
Credit: 32,627,367
RAC: 23,712
Germany
Message 1208453 - Posted: 21 Mar 2012, 15:10:00 UTC
Last modified: 21 Mar 2012, 15:10:45 UTC

First of all, I don't think we will need new apps, but there will be some for better performance.

As far as I'm concerned, GCN uses a different technique for calculation, so the current apps will not benefit from this architecture.

Raistmer is developing the ATI/AMD apps, and Jason G handles the Cuda optimisation.
____________

Richard Haselgrove (Project donor)
Volunteer tester
Joined: 4 Jul 99
Posts: 8465
Credit: 48,951,506
RAC: 75,693
United Kingdom
Message 1208476 - Posted: 21 Mar 2012, 16:10:15 UTC - in response to Message 1208445.

Does someone from nVidia have to come in (again) and rework the apps for Kepler?

I don't think anybody really knows the answer to that - which is why I would advocate caution until the silicon hits the tarmac.

I have vivid memories of the Fermi launch, two years ago. I think NVidia knew their previous two applications (608 and 609) were incompatible with Fermi, and supplied the replacement 610 application to the project. But the message didn't get through, and the penny didn't drop, at the project end - until I applied some brute force. Beta message 39386

We think NVidia learned their lesson over that one, and stopped using sneaky optimisation techniques that didn't conform to their own programming guidelines. But we won't know for certain until...

NVidia have, we think, withdrawn from collaboration with SETI now that their initial publicity objectives have been achieved (and ATI never even started collaborating, but that's a different story). So we may be in for interesting times, again. I'll be trying to keep an eye on how it develops.

Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1208513 - Posted: 21 Mar 2012, 18:06:31 UTC - in response to Message 1208476.

Thanx again Richard. That more than covers the nVidia app question, and the Beta thread helped me understand your comment about Developers and Testers.

So here's what's left for anybody that feels like jumping in:
- Do the GCN cards need new apps?
- Is GCN completely different at GPU computing (compared to last gen AMD GPUs), or just a fancy new name?
- Will GCN be substantially* better (in theory) than previous AMD cards at Seti?
- Why is nVidia so much better at Seti? (Is it Cuda? The architecture? Both? Or, in light of Richard's new info, is it just the fact that no-one from ATI/AMD has lent a helping hand?)

*And when I say substantially, I mean performance gains other than those gained by a "simple" die-shrink.

msattler (Project donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 38923
Credit: 578,744,250
RAC: 514,850
United States
Message 1208517 - Posted: 21 Mar 2012, 18:11:50 UTC - in response to Message 1208513.

Thanx again Richard. That more than covers the nVidia app question, and the Beta thread helped me understand your comment about Developers and Testers.

So here's what's left for anybody that feels like jumping in:
- Do the GCN cards need new apps?
- Is GCN completely different at GPU computing (compared to last gen AMD GPUs), or just a fancy new name?
- Will GCN be substantially* better (in theory) than previous AMD cards at Seti?
- Why is nVidia so much better at Seti? (Is it Cuda? The architecture? Both? Or, in light of Richard's new info, is it just the fact that no-one from ATI/AMD has lent a helping hand?)

*And when I say substantially, I mean performance gains other than those gained by a "simple" die-shrink.


I suspect you are asking some questions that shall have no answer until either some users or optimizers have such hardware to test with. The Lunatics folks have already stated that they don't know if current apps will run correctly on Kepler, or if it will require another round of optimization to make them compatible.
____________
*********************************************
Embrace your inner kitty...ya know ya wanna!

I have met a few friends in my life.
Most were cats.

Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1209448 - Posted: 24 Mar 2012, 0:51:22 UTC - in response to Message 1208517.

I suspect you are asking some questions that shall have no answer until either some users or optimizers have such hardware to test with. The Lunatics folks have already stated that they don't know if current apps will run correctly on Kepler, or if it will require another round of optimization to make them compatible.


Yeah, I'm confusing everybody by bringing up optimizations. My questions were more about engineering/architecture. I'm going to take a long hard look at the detailed tech specs of my ION and Sten's 315m (which are based on the same chip apparently) and see if I can figure out what makes Seti "tick".

Meanwhile, anybody want to have a shot at explaining why nVidia have been so much better at Seti than AMD, for the past couple of years?

msattler (Project donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 38923
Credit: 578,744,250
RAC: 514,850
United States
Message 1209449 - Posted: 24 Mar 2012, 0:55:35 UTC - in response to Message 1209448.


Meanwhile, anybody want to have a shot at explaining why nVidia have been so much better at Seti than AMD, for the past couple of years?

Uhh, maybe that they stepped up to the plate early on, and gave Seti and the optimizers the tools to make it so? AMD, not so.
____________
*********************************************
Embrace your inner kitty...ya know ya wanna!

I have met a few friends in my life.
Most were cats.

Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1209452 - Posted: 24 Mar 2012, 1:04:57 UTC - in response to Message 1209449.

Fair enough. But what about board design and tech specs? Was there something that gave nVidia an advantage @seti up till now?

jason_gee (Project donor)
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 4964
Credit: 73,105,212
RAC: 15,274
Australia
Message 1209472 - Posted: 24 Mar 2012, 2:22:33 UTC - in response to Message 1209452.
Last modified: 24 Mar 2012, 2:35:56 UTC

Fair enough. But what about board design and tech specs? Was there something that gave nVidia an advantage @seti up 'till now?


Better software all around: the app (NVidia helped the project get it started), drivers, developer tools, language & libraries, as well as a solid bug-reporting mechanism with actual feedback.

On the hardware side, having kept the 'programming model' (a hardware architecture term) fairly consistent since G80 has helped, along with fairly rational generational extensions for new features that can mostly either be used or not, so much older code will work without modification (though targeted tuning & optimisation is usually beneficial). This means development can continually improve instead of getting bogged down in major rewrites of non-critical code every time. The Cuda core 'virtual architecture' concept is a fairly potent one that has ramifications for the hardware arrangement itself and for all the software support aspects described above, kind of 'unifying' everything pretty well.
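
To illustrate (a toy example of my own, not anything from the SETI apps): a trivial kernel like the one below, built with embedded PTX for a virtual architecture (e.g. nvcc -arch=compute_10 scale.cu), gets JIT-compiled by the driver for whichever real GPU it finds, so the same code runs from G80 through Fermi and beyond without modification.

#include <stdio.h>
#include <cuda_runtime.h>

// Scale every element of an array by a constant factor.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) data[i] *= factor;                   // one element per thread
}

int main()
{
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = 0;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Same launch syntax on every generation since G80.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[0] = %.1f\n", host[0]);  // prints 2.0 if the kernel ran
    return 0;
}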

jason
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

Wiggo
Joined: 24 Jan 00
Posts: 6790
Credit: 93,090,960
RAC: 75,785
Australia
Message 1209520 - Posted: 24 Mar 2012, 4:47:31 UTC - in response to Message 1209472.

AMD looks like it'll have the edge over nVidia, though, when it comes to doing APs (Astropulse).

Cheers.
____________

Alex Storey
Volunteer tester
Joined: 14 Jun 04
Posts: 536
Credit: 1,644,407
RAC: 574
Greece
Message 1210023 - Posted: 25 Mar 2012, 14:30:01 UTC

850

"The Answer to the Ultimate Question of (Seti) Life, the (Seti) Universe, and Everything"

It's obviously not very accurate but hey, cut me some slack... I haven't got 7.5 million years left in me to get it absolutely right:)

Just take the magic number and multiply by the number given in the Relative Compute Performance column. Is the result anywhere near your GPU's RAC? Depending on your tolerance to variation, I think it might be pretty close.
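
For example (a toy sketch of the arithmetic, not project code; the 0.28 relative-performance figure below is just a placeholder, read yours off the table):

#include <stdio.h>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }

    const double magic = 850.0;         // the magic number above
    const double relative_perf = 0.28;  // placeholder: take this from the table

    printf("%s @ %d MHz core clock\n", prop.name, prop.clockRate / 1000);
    printf("estimated RAC: %.0f\n", magic * relative_perf);
    return 0;
}

So a card listed at 0.28 would predict a RAC of around 238.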

Funny how you set out to do one thing with numbers, and then they take you wherever they want. I was trying to decipher the tech specs of Sten's 315m GPU and my ION2, but couldn't quite nail it. Unless this new fad called ASIC quality has something to do with it, the best I can tell is that GPU core clock speed is the all-important number. All else is (pretty much) equal when comparing our GPUs. Funny 'cause I thought it would be shader clock speed, but it's the same. Then one thing led to another and hey presto: 850!

Jason, thanx for the backstage Cuda report. Always appreciated:)

