Why is my AMD GPU so much slower than Nvidia, despite having more cores?
baron_iv | Joined: 4 Nov 02 | Posts: 109 | Credit: 104,905,241 | RAC: 0
Does anyone know why a computer with 2 AMD R9 Fury cards (over 7100 cores of yummy goodness) performs WORSE than my slower computer with 2 Nvidia GTX 1070s? For video editing, the 2 Fury cards **destroy** the GTX 1070s; there's not even a contest: the Fury cards complete complex edits and transcodes in a little less than half the time. So why doesn't that pure computational "horsepower" translate into SETI@home? Is it just because the AMD side isn't getting the same love from the programmers that the Nvidia side gets? For reference, my Fury cards have 3584 cores each, and I have two. The Nvidia 1070s have 1920 cores, and I also have two of those. Is CUDA just more efficient than OpenCL? As far as SETI goes, I'm basically just burning power, and I'd be FAR better off with a single Nvidia GTX 1080 Ti in terms of power usage and SETI results crunched.
-baron_iv
Proud member of: GPU Users Group
Keith Myers | Joined: 29 Apr 01 | Posts: 13161 | Credit: 1,160,866,277 | RAC: 1,873
From looking at your systems, you are not comparing apples to apples. More like apples to oranges, since the Nvidia computer is running the Linux special app and the ATI computer is running the standard SoG app. If there were a "special" Linux app for the ATI cards, they would likely be comparable.
Seti@Home classic workunits: 20,676 | CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
baron_iv | Joined: 4 Nov 02 | Posts: 109 | Credit: 104,905,241 | RAC: 0
> From looking at your systems, you are not comparing apples to apples. More like apples to oranges since the Nvidia computer is running the Linux special app and the ATI computer is running the standard SoG app. If there was a "special" linux version app for the ATI cards, they likely would be comparable.

Why isn't there a "special" app for AMD?
-baron_iv
Proud member of: GPU Users Group
Keith Myers | Joined: 29 Apr 01 | Posts: 13161 | Credit: 1,160,866,277 | RAC: 1,873
Probably because Petri (the developer of the special app) doesn't have or use ATI cards. You should probably ask over in the specific thread for the special app: Linux CUDA 'Special' App finally available, featuring Low CPU use.
Seti@Home classic workunits: 20,676 | CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
rob smith | Joined: 7 Mar 03 | Posts: 22158 | Credit: 416,307,556 | RAC: 380
Adding to that, the Nvidia+CUDA combination does appear to be much more amenable to this sort of optimisation than the OpenCL+AMD combination.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Grant (SSSF) | Joined: 19 Aug 99 | Posts: 13720 | Credit: 208,696,464 | RAC: 304
> Does anyone know why a computer with 2 AMD R9 Fury cards (over 7100 cores of yummy goodness) performs WORSE than my slower computer with 2 Nvidia GTX 1070s? For video editing, the 2 Fury cards **destroy** the GTX 1070s, I mean there's not even a contest, the Fury cards complete complex edits and transcodes in a little less than half the time. So why doesn't that pure computational "horsepower" translate into SETI@home?

For the same reason the Nvidia cards do so poorly with video editing: the software isn't as optimised for that particular architecture/compute platform. Get someone who knows what they're doing to optimise the video editing software to take advantage of the GTX 1070s, and the video editing performance situation could quite easily be reversed.
Grant
Darwin NT
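As an aside, the raw-horsepower premise in the original question does check out on paper. A back-of-the-envelope peak-FP32 estimate (cores × clock × 2 FLOPs per fused multiply-add; the ~1.0 GHz and ~1.68 GHz clocks below are approximate reference/boost figures, not measured values) puts a single Fury slightly ahead of a single GTX 1070:

```python
# Back-of-the-envelope peak single-precision throughput:
# cores * clock (GHz) * 2 (one fused multiply-add = 2 FLOPs per cycle)
def peak_tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000

fury = peak_tflops(3584, 1.0)       # R9 Fury: 3584 stream processors, ~1.0 GHz
gtx_1070 = peak_tflops(1920, 1.68)  # GTX 1070: 1920 CUDA cores, ~1.68 GHz boost

print(f"R9 Fury:  ~{fury:.1f} TFLOPS")    # ~7.2 TFLOPS
print(f"GTX 1070: ~{gtx_1070:.1f} TFLOPS")  # ~6.5 TFLOPS
```

On paper the two cards are in the same league, so the large gap in SETI@home throughput has to come from how well each application maps onto the hardware, as the replies above describe.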
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.