GPUs: AMD vs nVidia vs The Rest of The World

Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443233 - Posted: 17 Nov 2013, 1:50:43 UTC

nVidia look to be up to their continuing anti-FLOSS stance by trying to cripple the main free and open-source compiler suite with some very nVidia-specific code for CUDA-only hardware... No wonder Linus gave them the single-digit symbol.

Rather than suffer such silliness from nVidia...


How do AMD or others now compare for general purpose GPU (parallel) computing?


Happy super-fast GPU crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443233 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1443240 - Posted: 17 Nov 2013, 1:59:51 UTC

I read your link. Not being able to understand geek tech, I have no clue what the hell they are talking about.
So please do me and the other non-geeks who are reading this a favor, and say in plain English why you think this is a bad deal for the world.

Old James
ID: 1443240 · Report as offensive
Profile arkayn
Volunteer tester
Avatar

Send message
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1443245 - Posted: 17 Nov 2013, 2:12:28 UTC - in response to Message 1443240.  

I read your link. Not being able to understand geek tech, I have no clue what the hell they are talking about.
So please do me and the other non-geeks who are reading this a favor, and say in plain English why you think this is a bad deal for the world.


Replies to the article.
http://phoronix.com/forums/showthread.php?88288-NVIDIA-Mentor-Graphics-May-Harm-GCC

ID: 1443245 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443249 - Posted: 17 Nov 2013, 2:22:21 UTC - in response to Message 1443240.  
Last modified: 17 Nov 2013, 2:42:26 UTC

I read your link. Not being able to understand geek tech, I have no clue what the hell they are talking about.

So please do me and the other non-geeks who are reading this a favor, and say in plain English why you think this is a bad deal for the world.

OK... That article is deeply technical for what I see as rather a fiendish lock-out trick being foisted...


Summary:

For what nVidia look to be pushing, a hideous amount of time will be wasted getting gcc to support something highly vendor-specific. That is the sort of thing that should remain hidden within the vendor if that vendor cannot abide open standards. Personally, I view their strategy as an act of sabotage.



For the further detail: So... (Deep breath! ;-) )

"gcc" (GNU Compiler Collection) is a FLOSS compiler suite that is very widely used. It's been around since the dawn of widespread computing. The license insists on freedom.

To compile a program, you write in some high-level language such as "C", and gcc compiles that into assembler and then the binary code that your computer's CPU can directly run/execute.
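
As a very rough sketch of those steps (hypothetical file names, but real gcc options):

/* hello.c -- hypothetical example.
 * Typical steps:
 *   gcc -S hello.c        ->  hello.s  (human-readable assembler)
 *   gcc -c hello.c        ->  hello.o  (binary object code)
 *   gcc hello.c -o hello  ->  hello    (linked executable the CPU runs)
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from gcc\n");
    return 0;
}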

What nVidia is pushing is to have extensions added to gcc to enable parallel programming of GPUs in general. OK so far: write your code and then you can execute it on a GPU, all good and as it should be. Except... that is NOT the case. Instead, nVidia look to be foisting gcc additions that will compile your code only as far as their proprietary intermediate "PTX" code. You must then use their proprietary tools to convert the PTX into the final GPU code before it can be used.

And with that, you can bet that nVidia will poisonously contrive the high-level instructions to lock you into using only their GPU architecture... Also note that the "PTX" intermediate instructions can easily be contrived to be very specific to nVidia hardware, giving all other vendors an unnecessary nightmare for their own GPUs... Note also that merely supporting "PTX" as a compiler target in gcc may well 'poison' the surrounding gcc code by forcing it to adopt nVidia's specific architecture.
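
As a rough sketch of the kind of source these proposed extensions aim at (an illustrative OpenACC example of my own, not taken from the actual patches): the directives themselves are an open standard, yet under the proposed scheme gcc would lower the marked loop only as far as nVidia's PTX, which still needs nVidia's own tools before a GPU can run it.

/* saxpy.c -- illustrative OpenACC sketch (hypothetical example).
 * The pragma asks the compiler to offload the loop to an accelerator.
 * Under the proposed gcc patches, the offloaded part would be emitted
 * as nVidia "PTX", which still needs nVidia's proprietary tools to
 * become something a GPU can actually execute. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}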


That's all fine if they are honest and open about it and do not abuse gcc to shackle the other vendors.

Meanwhile, an important part of open standards is that EVERYONE can freely use those standards for the advantage of everyone.

(Instead, there looks to be far too much vandalism from vendors trying to lock down the market so that it is compatible only with their own products...)

Regardless, a hideous amount of time will be wasted getting gcc to support something which should remain hidden within the vendor if that vendor cannot abide open standards.


So, rather than perpetuate what I see as continuing vandalism: do we have a useful choice elsewhere for hardware, with more compatible software support?

Happy super-fast crunchin',
Martin

(All just my humble personal opinion as always... And Christmas approaches! :-) )
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443249 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1443262 - Posted: 17 Nov 2013, 2:58:32 UTC

Well, I had a hard time with that explanation. But if I understand you right, you're saying that if a vendor such as EVGA, Gigabyte, or whoever makes a licensed copy of an NVIDIA GPU doesn't agree with whatever tech-speak you stated, they might not be able to make a GPU? Or if they do, it might not work as well because of the new NVIDIA code or process?

Old James
ID: 1443262 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443273 - Posted: 17 Nov 2013, 3:38:46 UTC - in response to Message 1443262.  
Last modified: 17 Nov 2013, 3:44:22 UTC

... they might not be able to make a GPU? Or if they do, it might not work as well because of the new NVIDIA code or process?

My understanding is that if you are not nVidia, then indeed so.

Or, at the very least as I understand it, should gcc become hobbled in nVidia's favour in that way, all other GPU vendors may then be put to extra expense, or may have the use of their products restricted in various ways. All at a cost to their customers...


All part of the same game of vandalism: design for incompatibility...

IT is what we allow it to be...
Martin

(All just my humble opinion...)
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443273 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1443277 - Posted: 17 Nov 2013, 3:56:06 UTC
Last modified: 17 Nov 2013, 3:56:33 UTC

OK. I can see your point. To a degree. But where is your outcry over the fact that car manufacturers have been that way since they came into existence? I mean, you can't go to an auto parts store and buy brakes for a 2003 BMW and expect them to fit your '73 VW, can you?

Sounds like another Linux sour-grapes issue to me.

Old James
ID: 1443277 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443284 - Posted: 17 Nov 2013, 4:13:03 UTC - in response to Message 1443277.  
Last modified: 17 Nov 2013, 4:13:53 UTC

OK. I can see your point. To a degree. But where is your outcry over the fact that car manufacturers have been that way since they came into existence? I mean, you can't go to an auto parts store and buy brakes for a 2003 BMW and expect them to fit your '73 VW, can you?

Sounds like another Linux sour-grapes issue to me.

Poor example.

It is more like a major car manufacturer franchising the gas stations to fit special high-speed filler hoses that fit only that manufacturer's cars.

If you buy that manufacturer's cars, then you get to fill real quick.

Any other vehicle and you're left waiting whilst the gas trickles in (suspiciously unnaturally) slowly...


(Hey! And people do complain, and the Trading Standards people do sue the car manufacturers for anti-competitive practices over some standard, frequently replaced parts such as tyres, brakes and fluids...)

All a question of freedom and coercion...


Note that "gcc" is a constantly updated 30+ year old essential part of the IT world that is used by most of the world as well as just nVidia...


IT is what we allow it to be...
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443284 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1443293 - Posted: 17 Nov 2013, 4:35:47 UTC - in response to Message 1443284.  

OK. I can see your point. To a degree. But where is your outcry over the fact that car manufacturers have been that way since they came into existence? I mean, you can't go to an auto parts store and buy brakes for a 2003 BMW and expect them to fit your '73 VW, can you?

Sounds like another Linux sour-grapes issue to me.

Poor example.

It is more like a major car manufacturer franchising the gas stations to fit special high-speed filler hoses that fit only that manufacturer's cars.

If you buy that manufacturer's cars, then you get to fill real quick.

Any other vehicle and you're left waiting whilst the gas trickles in (suspiciously unnaturally) slowly...


(Hey! And people do complain, and the Trading Standards people do sue the car manufacturers for anti-competitive practices over some standard, frequently replaced parts such as tyres, brakes and fluids...)

All a question of freedom and coercion...


Note that "gcc" is a constantly updated 30+ year old essential part of the IT world that is used by most of the world as well as just nVidia...


IT is what we allow it to be...
Martin

And they still make different brakes and fluids and other crap, after all these years.
So what does crying on the internet do about it? Nothing!
Maybe Linux should go into politics. Everything would be shared and free.

Old James
ID: 1443293 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443297 - Posted: 17 Nov 2013, 4:47:19 UTC - in response to Message 1443293.  
Last modified: 17 Nov 2013, 4:47:51 UTC

... So what does crying on the internet do about it? Nothing!...

Fortunately, you're proved wrong on that every day, by many examples.


Perhaps a more readable everyday summary is eloquently given in this comment post about the nVidia approach:

...Oddly enough, the proposed exploitees don't care much for this approach.


I wouldn't be surprised if nVidia get the finger from a few more people other than Linus...

IT is what we allow it to be...
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443297 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 20252
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443298 - Posted: 17 Nov 2013, 4:51:43 UTC

... Which comes back to...

How do the likes of AMD, Intel, and any others compare for using GPUs for compute tasks and for BOINC?

How do the AMD "APUs" compare? Or do they pale alongside discrete GPUs?

Have AMD leapfrogged the nVidia CUDA architecture?


Or rather, any comments on jumping to AMD for my next GPU?


Happy super-fast crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443298 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1443302 - Posted: 17 Nov 2013, 5:02:54 UTC - in response to Message 1443298.  

From what I've seen, the integrated processor GPUs are about equal to low-end discrete GPU cards. The writing is on the wall. Soon there won't be any nVidia chipsets on motherboards. Your basic computer will come with either Intel or AMD graphics. nVidia will lose that market. It won't be long before nVidia becomes a niche player in the high-end market, catering to those few who still use a computer to play games. Just my opinion, I could be wrong :-)
ID: 1443302 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1443303 - Posted: 17 Nov 2013, 5:15:39 UTC

It really doesn't matter. And if it matters to you so badly, then maybe Linux can produce a GPU of their own. Yeah, there's the ticket. Get with SETI@home and design your own Linux crunching code for your very own Linux GPU. I bet you'd sell millions. And make sure you could only run Linux, whatever edition, on it, though.

My question is still: if Linux is so good, why can't you even give it away?

Old James
ID: 1443303 · Report as offensive
spitfire_mk_2
Avatar

Send message
Joined: 14 Apr 00
Posts: 563
Credit: 27,306,885
RAC: 0
United States
Message 1443304 - Posted: 17 Nov 2013, 5:27:22 UTC

I bought nVidia because it has superior support when used for SETI.
ID: 1443304 · Report as offensive
Profile betreger Project Donor
Avatar

Send message
Joined: 29 Jun 99
Posts: 11361
Credit: 29,581,041
RAC: 66
United States
Message 1443310 - Posted: 17 Nov 2013, 6:28:06 UTC

I bought nVidia again because of the excellent support they have given me in the past.
ID: 1443310 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1443334 - Posted: 17 Nov 2013, 9:51:13 UTC
Last modified: 17 Nov 2013, 9:52:27 UTC

Interesting article and responses on those links, and certainly important to watch these things in a commercial world.

My main query/concern with the whole thing is that, contrary to what the article implies, the CUDA compilers transitioned successfully to an LLVM-based implementation back in CUDA 4, and OpenACC is a high-level extension [high-level being of limited interest to me at this time]. Once the LLVM intermediate representation is generated, any backend may be created/used. Maybe they want nVidia to make a backend for AMD GPUs too? Unlikely, and supposedly they already have one, right?
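
A minimal sketch of that pipeline, assuming the stock LLVM tools (illustrative only, not the actual CUDA toolchain):

/* scale.c -- illustrative only.
 * Conceptually:
 *   clang -S -emit-llvm scale.c -o scale.ll    (front end  -> LLVM IR)
 *   llc -march=nvptx64 scale.ll -o scale.ptx   (NVPTX backend -> PTX text)
 * The same scale.ll could just as well be handed to any other backend
 * (x86, ARM, an AMD GPU backend, ...) without touching the front end. */
void scale(float *data, float factor, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}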

While GCC, and the C/C++ languages in general, arguably form the basis for 'a lot', with respect to parallelism, multithreading and emergent heterogeneous capability they have seriously stagnated.

Maybe it takes someone to boldly step into the hornets' nest of stale arcana, stir it around a bit, and trigger the rest of us to make something better.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1443334 · Report as offensive
Profile Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1443380 - Posted: 17 Nov 2013, 15:08:35 UTC - in response to Message 1443249.  

Nvidia say that they won't support OpenCL any further and will concentrate on their own CUDA instead. The suspected reasoning behind this is that Nvidia refuses to give other hardware manufacturers (read: Intel, AMD) an easy time using Nvidia's code implemented in the OpenCL compiler. So much, then, for the Khronos initiative: The Khronos Group is a not-for-profit industry consortium creating open standards for the authoring and acceleration of parallel computing, graphics, dynamic media, computer vision and sensor processing on a wide variety of platforms and devices. All Khronos members are able to contribute to the development of Khronos API specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge 3D platforms and applications through early access to specification drafts and conformance tests.

Minus Nvidia then. They refuse to update above OpenCL 1.1 or be part of an open standard for parallel computing. Proprietary CUDA FTW!

And so the really easy thing for the user base and community to do is to just ignore Nvidia products and go with other manufacturers that offer OpenCL processing. It may not be as fast yet as CUDA is, but give it all the attention and processing power, and that'll fix itself quite easily. Especially when Nvidia finds they don't sell their products anymore, and so their CUDA development will fall behind.

It won't be the first time that the community can make or break a case like this. Anyone here remember the Matrox card you used to play games with on 2 to 6 monitors at the same time? That was well before ATI or Nvidia could do it. Until they managed to copy the hardware (well, in the case of ATI, it was steal the tech...) and produce it in bigger quantities and at cheaper prices. The community turned away from Matrox and now they're just a distant memory, having left the consumer market completely.

I'm not advocating that should be done to Nvidia, but at the same time, they might be taught a lesson.
ID: 1443380 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1443398 - Posted: 17 Nov 2013, 16:21:19 UTC - in response to Message 1443380.  

Hahaha, yeah, I can see how it might be tough for Intel and AMD to come to terms with the fact that they both build compilers inferior to the open-source LLVM project, and yet want nVidia's help after having shoved them out of the x86 market completely.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1443398 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1443401 - Posted: 17 Nov 2013, 16:28:34 UTC
Last modified: 17 Nov 2013, 16:32:59 UTC

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AstroPulse app: asynchronous buffer reads are actually performed as synchronous ones.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanation and test case were provided more than a week ago - no sign of progress since then.
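
A minimal sketch of the pattern involved (illustrative only, not the actual AstroPulse code): a non-blocking read should return at once and let the host do other work until its event completes, yet on the nVidia runtime the call was observed to block anyway.

/* Illustrative sketch only -- not the actual AstroPulse code. */
#include <CL/cl.h>

void read_results(cl_command_queue queue, cl_mem buf, size_t bytes, void *host_ptr)
{
    cl_event done;

    /* CL_FALSE requests an asynchronous (non-blocking) read. */
    clEnqueueReadBuffer(queue, buf, CL_FALSE, 0, bytes, host_ptr, 0, NULL, &done);

    /* ...the host could be doing useful work here instead of waiting... */

    clWaitForEvents(1, &done);   /* only now should the host block */
    clReleaseEvent(done);
}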

EDIT: BTW, the OpenCL 2.0 standard will have similar abilities for launching kernels right from GPU code, without host intervention, as recent CUDA has. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1443401 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1443453 - Posted: 17 Nov 2013, 19:15:33 UTC - in response to Message 1443401.  

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AstroPulse app: asynchronous buffer reads are actually performed as synchronous ones.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanation and test case were provided more than a week ago - no sign of progress since then.

EDIT: BTW, the OpenCL 2.0 standard will have similar abilities for launching kernels right from GPU code, without host intervention, as recent CUDA has. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.

Maybe you should just ask Juan what he has found. I've never seen an nVidia host with such low OpenCL CPU use. His other hosts show 'lower than normal' usage as well.
Are those misprints? AstroPulse v6 tasks for computer 7037676
ID: 1443453 · Report as offensive