Why is my AMD GPU so much slower than Nvidia, despite having more cores?

baron_iv
Volunteer tester
Joined: 4 Nov 02
Posts: 109
Credit: 104,905,241
RAC: 0
United States
Message 1889584 - Posted: 13 Sep 2017, 18:55:05 UTC

Does anyone know why a computer with 2 AMD R9 Fury cards (over 7100 cores of yummy goodness) performs WORSE than my slower computer with 2 Nvidia GTX 1070s? For video editing, the 2 Fury cards **destroy** the GTX 1070s; it's not even a contest, and the Fury cards complete complex edits and transcodes in a little less than half the time. So why doesn't that pure computational "horsepower" translate into SETI@home? Is it just because the AMD side isn't getting the same love from the programmers that the Nvidia side gets?

For reference, my Fury cards have 3584 cores each, and I have two. The Nvidia 1070s have 1920 cores, and I also have two of those. Is CUDA just more efficient than OpenCL? As far as SETI goes, I'm basically just burning power, and in terms of power usage and results crunched I'd be FAR better off if I bought a single Nvidia GTX 1080 Ti.
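
As a rough sanity check on those core counts: peak single-precision throughput is roughly cores x 2 FLOPs per cycle (one fused multiply-add) x clock speed, and once clocks are factored in the two cards are much closer on paper than the core counts alone suggest. A minimal back-of-the-envelope sketch, using approximate reference clocks rather than measured boost clocks:

```cpp
// Back-of-the-envelope peak FP32 throughput: cores x 2 FLOPs/cycle (FMA) x clock.
// Clock values are approximate reference figures, not measured boost clocks.
#include <cstdio>

int main() {
    const double fury_cores = 3584, fury_clock_ghz = 1.00;  // R9 Fury, ~1000 MHz
    const double gtx_cores  = 1920, gtx_clock_ghz  = 1.68;  // GTX 1070, ~1683 MHz boost

    double fury_tflops = fury_cores * 2.0 * fury_clock_ghz / 1000.0;
    double gtx_tflops  = gtx_cores  * 2.0 * gtx_clock_ghz  / 1000.0;

    printf("R9 Fury  peak FP32: ~%.1f TFLOPS\n", fury_tflops);  // ~7.2 TFLOPS
    printf("GTX 1070 peak FP32: ~%.1f TFLOPS\n", gtx_tflops);   // ~6.5 TFLOPS
    return 0;
}
```

So per card the theoretical FP32 gap is only around 10%, which points at the software each machine is running rather than at the silicon itself.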
-baron_iv
Proud member of:
GPU Users Group
ID: 1889584
Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1889597 - Posted: 13 Sep 2017, 19:40:32 UTC - in response to Message 1889584.  

From looking at your systems, you are not comparing apples to apples. It's more like apples to oranges, since the Nvidia computer is running the Linux "special" app and the ATI computer is running the standard SoG app. If there were a "special" Linux app for the ATI cards, they would likely be comparable.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1889597
baron_iv
Volunteer tester
Joined: 4 Nov 02
Posts: 109
Credit: 104,905,241
RAC: 0
United States
Message 1889613 - Posted: 13 Sep 2017, 20:37:35 UTC - in response to Message 1889597.  

Keith Myers wrote:
From looking at your systems, you are not comparing apples to apples. It's more like apples to oranges, since the Nvidia computer is running the Linux "special" app and the ATI computer is running the standard SoG app. If there were a "special" Linux app for the ATI cards, they would likely be comparable.


Why isn't there a "special" app for AMD?
-baron_iv
Proud member of:
GPU Users Group
ID: 1889613
Keith Myers · Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1889615 - Posted: 13 Sep 2017, 20:41:36 UTC - in response to Message 1889613.  

Probably because Petri (the developer of the special app) doesn't have or use ATI cards. You should probably ask over in the dedicated thread for the special app:

Linux CUDA 'Special' App finally available, featuring Low CPU use
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1889615
rob smith · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22158
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1889617 - Posted: 13 Sep 2017, 20:43:47 UTC
Last modified: 13 Sep 2017, 20:43:57 UTC

Adding to that, the Nvidia+CUDA combination does appear to be much more amenable to this sort of optimisation than the OpenCL+AMD combination.
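
One concrete illustration of the difference (a hypothetical sketch, not code taken from the special app): CUDA exposes warp-level intrinsics such as __shfl_down_sync, which let a reduction stay entirely in registers with no shared memory traffic and no barriers, whereas a portable OpenCL 1.x kernel would normally stage partial sums in local memory with explicit barrier() calls. Hardware-specific shortcuts like this are part of what a hand-tuned CUDA build can exploit.

```cuda
// Hypothetical sketch of a warp-level sum reduction using CUDA shuffle intrinsics.
// This is not code from the SETI 'special' app - just an example of the kind of
// architecture-specific optimisation that CUDA makes easy and portable OpenCL does not.
__global__ void block_sum(const float *in, float *out, int n)
{
    float v = 0.0f;

    // Grid-stride loop: each thread accumulates its share of the input.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += blockDim.x * gridDim.x)
        v += in[i];

    // Reduce within the warp entirely in registers - no shared memory, no __syncthreads().
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);

    // Lane 0 of each warp adds its partial sum to the global total.
    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, v);
}
```

The equivalent portable OpenCL reduction typically goes through __local memory and barrier(CLK_LOCAL_MEM_FENCE), which runs everywhere but leaves far less room for tuning to one specific GPU.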
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1889617
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1889700 - Posted: 14 Sep 2017, 7:06:30 UTC - in response to Message 1889584.  

baron_iv wrote:
Does anyone know why a computer with 2 AMD R9 Fury cards (over 7100 cores of yummy goodness) performs WORSE than my slower computer with 2 Nvidia GTX 1070s? For video editing, the 2 Fury cards **destroy** the GTX 1070s; it's not even a contest, and the Fury cards complete complex edits and transcodes in a little less than half the time. So why doesn't that pure computational "horsepower" translate into SETI@home?

For the same reason the Nvidia cards do so poorly with video editing: the software isn't as well optimised for that particular architecture/compute platform. Get someone who knows what they're doing to optimise the video editing software to take advantage of the GTX 1070s, and the video editing performance situation could quite easily be reversed.
Grant
Darwin NT
ID: 1889700
