Message boards :
Number crunching :
AMD vs nVidia for Seti
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
When I started the "GPU wars" thread, it was with this quote: "GCN would be AMD’s Fermi moment, where AMD got serious about GPU computing..." I jumped to the conclusion that these new (Graphic Core Next) cards would be far more "Seti-friendly" and that in a generation or two, Seti's Top Computers might be a mix of both GPUs instead of an nVidia monopoly. I thought new apps would have to be written for GCN and Kepler too. But now I'm not so sure... What's the dealio with these two? |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Although GCN is already in the shops, Kepler isn't yet. I don't think either of the developers has a development sample from their preferred range yet, and I doubt that either of them can afford to buy one, either. Let alone the testers..... |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
Although GCN is already in the shops, Kepler isn't yet. Always appreciate your answers Richard, and to clarify, I wasn't asking "Why aren't GCN/Kepler apps out yet!?":) Here are my questions:
- Do these new cards (from both) need new apps?
- Is GCN completely different at GPU computing, or just a fancy new name?
- Will GCN be substantially* better (in theory) than previous AMD cards at Seti?
- Does someone from nVidia have to come in (again) and rework the apps for Kepler?
- Why is nVidia so much better at Seti? (Is it Cuda? The architecture? Both?)
I realize (most of) these questions can be answered in theory only (and that's what I'm looking for). I'm not looking for cold hard stats between GCN/Kepler, just a better understanding of the two, always in relation to Seti. *And when I say substantially, I mean performance gains other than those gained by a "simple" die-shrink. Edit: Also, who are the "developers" and who are the "testers"? Seti crew & Lunatics crew? I have next to no idea of who does what for whom, and how, around here. Not a gripe of course, just stating a fact. |
Mike Send message Joined: 17 Feb 01 Posts: 34258 Credit: 79,922,639 RAC: 80 |
First of all, I don't think we will need new apps, but there will be some for better performance. As far as I'm concerned, GCN uses a different technique for calculation, so the current apps will not benefit from this architecture. Raistmer is developing the ATI/AMD apps, and Jason G handles the Cuda optimisation. With each crime and every kindness we birth our future. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Does someone from nVidia have to come in (again) and rework the apps for Kepler? I don't think anybody really knows the answer to that - which is why I would advocate caution until the silicon hits the tarmac. I have vivid memories of the Fermi launch, two years ago. I think NVidia knew their previous two applications (608 and 609) were incompatible with Fermi, and supplied the replacement 610 application to the project. But the message didn't get through, and the penny didn't drop, at the project end - until I applied some brute force. Beta message 39386 We think NVidia learned their lesson over that one, and stopped using sneaky optimisation techniques that didn't conform to their own programming guidelines. But we won't know for certain until... NVidia have, we think, withdrawn from collaboration with SETI now that their initial publicity objectives have been achieved (and ATI never even started collaborating, but that's a different story). So we may be in for interesting times, again. I'll be trying to keep an eye on how it develops. |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
Thanx again Richard. That more than covers the nVidia app question, and the Beta thread helped me understand your comment about Developers and Testers. So here's what's left for anybody that feels like jumping in: - Do the GCN cards need new apps? - Is GCN completely different at GPU computing (compared to last gen AMD GPUs), or just a fancy new name? - Will GCN be substantially* better (in theory) than previous AMD cards at Seti? - Why is nVidia so much better at Seti? (Is it Cuda? The architecture? Both? Or, in light of Richard's new info, is it just the fact that no-one from ATI/AMD has lent a helping hand?) *And when I say substantially, I mean performance gains other than those gained by a "simple" die-shrink. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Thanx again Richard. That more than covers the nVidia app question, and the Beta thread helped me understand your comment about Developers and Testers. I suspect you are asking some questions that shall have no answer until either some users or optimizers have such hardware to test with. The Lunatics folks have already stated that they don't know if current apps will run correctly on Kepler, or if it will require another round of optimization to make them compatible. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
I suspect you are asking some questions that shall have no answer until either some users or optimizers have such hardware to test with. The Lunatics folks have already stated that they don't know if current apps will run correctly on Kepler, or if it will require another round of optimization to make them compatible. Yeah, I'm confusing everybody by bringing up optimizations. My questions were more about engineering/architecture. I'm going to take a long hard look at the detailed tech specs of my ION and Sten's 315m (which are based on the same chip apparently) and see if I can figure out what makes Seti "tick". Meanwhile, anybody want to have a shot at explaining why nVidia have been so much better at Seti than AMD, for the past couple of years? |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Uhh, maybe that they stepped up to the plate early on, and gave Seti and the optimizers the tools to make it so? AMD, not so. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
Fair enough. But what about board design and tech specs? Was there something that gave nVidia an advantage @seti up till now? |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Fair enough. But what about board design and tech specs? Was there something that gave nVidia an advantage @seti up till now? Better software all around: app (having helped the project get started), drivers, developer tools, language & libraries, as well as a solid bug reporting mechanism with actual feedback. On the hardware side, having kept the 'programming model' (a hardware architecture term) fairly consistent since G80 has helped, with fairly rational generational extensions for new features that mostly can either be used or not, so much older code will work without modification (though targeted tuning & optimisation is usually beneficial). This means development can continually improve instead of being bogged down with major rewrites of non-critical code every time. The Cuda core 'virtual architecture' concept is a fairly potent one that has ramifications for the hardware arrangement itself, and for all the described software support aspects, kind of 'unifying' everything pretty well. jason "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Wiggo Send message Joined: 24 Jan 00 Posts: 34744 Credit: 261,360,520 RAC: 489 |
AMD looks like it'll have the edge over nVIDIA though when it comes to doing AP's. Cheers. |
shizaru Send message Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0 |
850 "The Answer to the Ultimate Question of (Seti) Life, the (Seti) Universe, and Everything" It's obviously not very accurate but hey, cut me some slack... I haven't got 7.5 million years left in me to get it absolutely right:) Just take the magic number and multiply by the number given in the Relative Compute Performance column. Is the result anywhere near your GPU's RAC? Depending on your tolerance to variation, I think it might be pretty close. Funny how you set out to do one thing with numbers, and then they take you wherever they want. I was trying to decipher the tech specs of Sten's 315m GPU and my ION2, but couldn't quite nail it. Unless this new fad called ASIC quality has something to do with it, the best I can tell is that GPU core clock speed is the all important number. All else is (pretty much) equal when comparing our GPUs. Funny 'cause I thought it would be shader clock speed, but it's the same. Then one thing lead to another and hey presto: 850! Jason, thanx for the backstage Cuda report. Always appreciated:) |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.