GPUs: AMD vs nVidia vs The Rest of The World

Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1443457 - Posted: 17 Nov 2013, 19:21:53 UTC - in response to Message 1443453.  

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AP app: asynchronous buffer reads are actually performed synchronously.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanations and test case were provided more than a week ago - no sign of progress since then.

EDIT: BTW, the OpenCL 2.0 standard will have abilities similar to recent CUDA's for launching kernels directly from GPU code, without host intervention. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.
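The "asynchronous read that is really synchronous" failure mode described above is easy to show with a small model (a sketch in plain Python, not the actual OpenCL test case filed with nVidia; all names here are invented for illustration): a non-blocking enqueue should return immediately, so simply timing the call exposes the bug.

```python
import threading
import time

def fake_async_read(buf, done):
    # Simulate the device-to-host transfer, then signal completion.
    time.sleep(0.2)
    buf.append(42)
    done.set()

def enqueue_read(buf):
    """Supposed to behave like clEnqueueReadBuffer with
    blocking_read = CL_FALSE: return at once, signal later.
    BUG: the work runs inline, so the caller blocks anyway."""
    done = threading.Event()
    fake_async_read(buf, done)   # should be: threading.Thread(...).start()
    return done

buf = []
t0 = time.monotonic()
evt = enqueue_read(buf)
elapsed = time.monotonic() - t0

# A genuinely asynchronous enqueue returns almost instantly;
# taking about as long as the transfer means it was synchronous.
print(f"enqueue returned after {elapsed:.3f}s: "
      f"{'synchronous (buggy)' if elapsed > 0.1 else 'asynchronous'}")
```

If the call really were asynchronous, the host thread would be free to do other work (or sleep) during the transfer - which is exactly the CPU-usage saving the bug defeats.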

Maybe you should just ask Juan what he has found. I've never seen an nVidia host with such low OpenCL CPU use. His other hosts show 'lower than normal' usage as well.
Are those misprints? AstroPulse v6 tasks for computer 7037676

He's pulled across "rev 2058" from Beta testing - that contains the application code which Raistmer was able to copy across from Einstein's sources.
ID: 1443457
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1443466 - Posted: 17 Nov 2013, 19:31:01 UTC - in response to Message 1443457.  
Last modified: 17 Nov 2013, 19:33:35 UTC

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AP app: asynchronous buffer reads are actually performed synchronously.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanations and test case were provided more than a week ago - no sign of progress since then.

EDIT: BTW, the OpenCL 2.0 standard will have abilities similar to recent CUDA's for launching kernels directly from GPU code, without host intervention. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.

Maybe you should just ask Juan what he has found. I've never seen an nVidia host with such low OpenCL CPU use. His other hosts show 'lower than normal' usage as well.
Are those misprints? AstroPulse v6 tasks for computer 7037676

He's pulled across "rev 2058" from Beta testing - that contains the application code which Raistmer was able to copy across from Einstein's sources.

Looks pretty impressive. It makes those blanking blanked tasks really stand out...
ID: 1443466
juan BFP Crowdfunding Project Donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1443490 - Posted: 17 Nov 2013, 20:50:58 UTC - in response to Message 1443457.  
Last modified: 17 Nov 2013, 20:58:02 UTC

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AP app: asynchronous buffer reads are actually performed synchronously.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanations and test case were provided more than a week ago - no sign of progress since then.

EDIT: BTW, the OpenCL 2.0 standard will have abilities similar to recent CUDA's for launching kernels directly from GPU code, without host intervention. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.

Maybe you should just ask Juan what he has found. I've never seen an nVidia host with such low OpenCL CPU use. His other hosts show 'lower than normal' usage as well.
Are those misprints? AstroPulse v6 tasks for computer 7037676

He's pulled across "rev 2058" from Beta testing - that contains the application code which Raistmer was able to copy across from Einstein's sources.

All my hosts are running rev 2058 from Beta. It has very low CPU usage (good for me, since most of my CPUs are i5s), which allows me to run 2 AP WUs at a time on the multi-GPU hosts. But be aware, there is still a bug in this version, though it is very rare and appears randomly (hard to track). And it shows weird GPU usage on my i7 host.
ID: 1443490
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1443601 - Posted: 18 Nov 2013, 3:01:21 UTC - in response to Message 1443457.  
Last modified: 18 Nov 2013, 3:35:29 UTC

Actually, nVidia doesn't even have proper OpenCL 1.0 support.
A very nasty bug (feature?) was discovered recently while attempting to reduce the CPU usage of the OpenCL NV AP app: asynchronous buffer reads are actually performed synchronously.
The bug was filed via the nVidia CUDA registered developer program.
I got a response requesting a test case and a thorough explanation of what the buggy behaviour is in this case. The explanations and test case were provided more than a week ago - no sign of progress since then.

EDIT: BTW, the OpenCL 2.0 standard will have abilities similar to recent CUDA's for launching kernels directly from GPU code, without host intervention. That is, nVidia has hardware that complies. A pity they refuse to provide proper OpenCL support, then.

Maybe you should just ask Juan what he has found. I've never seen an nVidia host with such low OpenCL CPU use. His other hosts show 'lower than normal' usage as well.
Are those misprints? AstroPulse v6 tasks for computer 7037676

He's pulled across "rev 2058" from Beta testing - that contains the application code which Raistmer was able to copy across from Einstein's sources.


Sleep() & wait for event loops will be used in some places


That host uses the -use_sleep option, which never worked in previous builds precisely because of the bug I mentioned earlier. And sorry, Richard, this has nothing to do with Einstein's sources, though it happened close in time to the Intel GPU findings that followed from those.
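The "Sleep() & wait for event" idea behind -use_sleep can be sketched with a toy model (plain Python, not the app's actual code; names are invented): instead of spinning on completion status and pegging a CPU core, the host polls and yields for a short interval, trading a little latency for much lower CPU use.

```python
import threading
import time

def wait_spinning(evt):
    # Busy-wait: minimal latency, but one CPU core pegged at 100%
    # for the whole wait (the high-CPU-usage behaviour).
    while not evt.is_set():
        pass

def wait_sleeping(evt, interval=0.001):
    # -use_sleep style: poll, then give the CPU back for `interval`.
    # CPU use drops to near zero; completion is noticed at most
    # about `interval` late.
    while not evt.is_set():
        time.sleep(interval)

evt = threading.Event()
threading.Timer(0.05, evt.set).start()   # the 'GPU' finishes in ~50 ms
t0 = time.monotonic()
wait_sleeping(evt)
waited = time.monotonic() - t0
print(f"'kernel' completion noticed after {waited:.3f}s")
```

The sleep-based wait only works if the "asynchronous" call actually returns before completion - which is why the synchronous-read bug above made -use_sleep useless on earlier builds.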

EDIT: Thanks to the Einstein@home project we now have a further improved oclFFT library that works much more precisely on Intel GPUs than the original one (and the whole idea that native_sin/native_cos precision is too poor on Intel GPUs was inspired by conversations with Oliver). Also, a comparison between SETI's and Einstein's sources revealed some differences in synchronisation usage (namely, they use a direct clFinish() call instead of other possible methods). Extending this approach to a quite ridiculous degree (calling clFinish() after EACH OpenCL call) allowed CPU usage on Intel GPUs to be decreased considerably. (Note that Einstein's code doesn't use clFinish() in that way, but it looks like exactly this usage pattern - and definitely not the one I would call "normal" - gives the best results on Intel GPUs.)
On nVidia GPUs such excessive clFinish() calls do nothing but increase total runtime and CPU overhead (as one would expect from general considerations).
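Why a clFinish()-style barrier after every single enqueue fully serialises the work can be seen in a toy queue model (hypothetical Python, not SETI or Einstein code):

```python
import queue
import threading

class ToyCommandQueue:
    """Minimal stand-in for an OpenCL command queue: enqueued
    work runs on a worker thread; finish() blocks until the
    queue is drained, like clFinish()."""
    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            fn = self._q.get()
            fn()
            self._q.task_done()

    def enqueue(self, fn):
        self._q.put(fn)

    def finish(self):
        self._q.join()   # host blocks here, just like clFinish()

results = []
cq = ToyCommandQueue()
for i in range(4):
    cq.enqueue(lambda i=i: results.append(i))
    cq.finish()   # the 'ridiculous' pattern: a full sync after EVERY call
print(results)   # always [0, 1, 2, 3]: each finish() drained the queue
```

Each finish() forces a full host round trip, which is why the pattern only pays off where the driver's own waiting mechanism is the expensive part (the Intel GPU case here) and just adds overhead elsewhere.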
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1443601
ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20283
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1443992 - Posted: 19 Nov 2013, 4:55:59 UTC - in response to Message 1443297.  

...Perhaps a more readable everyday summary is eloquently given in this comment post about the nVidia approach:

...Oddly enough, the proposed exploitees don't care much for this approach.


I wouldn't be surprised if nVidia get the finger from a few more people other than Linus...


Is this why nVidia are emboldened, or driven, to throw their weight around and attempt to dictate terms?...

Top500: Red dragon ... graphical power

(See the accelerators graph. Note the recent trend...)


IT is what we allow it to be...
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1443992
William
Volunteer tester
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1444052 - Posted: 19 Nov 2013, 10:55:02 UTC

Please keep the discussion civil and technical or it's going to end up in Politics.

There IS a difference between opinion and slander.

Competition is one thing, and most of us are biased one way or the other for all sorts of reasons.

I don't like the undertone of some of the posts. If you want to be nasty, you can do so in Politics.
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1444052
ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20283
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1444566 - Posted: 20 Nov 2013, 17:10:46 UTC - in response to Message 1443398.  
Last modified: 20 Nov 2013, 17:11:24 UTC

Hahaha, yeah, I can see how it might be tough for Intel and AMD to come to terms with the fact that they both build inferior compilers to the open-source LLVM project, and want nVidia's help while having shoved them out of the x86 market completely.

There seems to be too much of that game-playing between the main players, and it helps no one. You could argue that the latest little game is just another in-kind (negative) step, as is always taken between such manufacturers. However, in this instance, the target is rather high-profile, widely used, and important...

(Thanks for good comment in the various other posts.)


For my own personal view:

nVidia nicely gained a good lead some time ago with their bold move into what at that time was a whole new GPGPU architecture that would work well enough for both industrial and consumer use. There are various graphs showing how successful that has been, although more recently the competition is starting to gain more market share.

I've been on nVidia since the early days of their CUDA, all to good gain.

Looking around for alternatives, the AMD APUs look good and interesting for their homogeneous memory access, but all at a small scale for onboard integrated graphics.

I'm guessing the likes of the Intel Phi and the Epiphany/Parallella systems are far too new/niche for the time being.

Which leaves the AMD (ATI?) Radeon GPUs as a comparable alternative?...



For the moment, I'm waiting to see which way things go with the latest controversy. (A shame we always seem to create, in effect, monopolies!)


Happy clean fast crunchin',
Martin

(All just humble personal opinion as ever and always.)
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1444566


 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.