GPU FLOPS: Theory vs Reality


Previous · 1 . . . 7 · 8 · 9 · 10 · 11 · 12 · 13 . . . 20 · Next

AuthorMessage
Profile -= Vyper =-
Volunteer tester
Joined: 5 Sep 99
Posts: 1652
Credit: 1,065,191,981
RAC: 2,537
Sweden
Message 1967303 - Posted: 26 Nov 2018, 21:56:22 UTC - in response to Message 1967250.  


D) Other GPUs like the 750Ti, etc. are clear winners in the cost × production equation, but they don't run the latest CUDA10 builds, so IMHO I prefer not to invest in them for a new system.


I agree about not planning on using GTX 750Tis in a new system. However, a 750Ti will run on Linux/CUDA 9.1. So if you already have them and unused slots :)

Tom


Actually, one of my hosts is using a CUDA 10 driver and the CUDA 10 app on 4 x 750Tis. :)
https://setiathome.berkeley.edu/show_host_detail.php?hostid=8570185

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65875
Credit: 55,293,173
RAC: 49
United States
Message 1967307 - Posted: 26 Nov 2018, 23:33:28 UTC - in response to Message 1966781.  

I have been waiting for a new scan, with data for the RTX 20xx cards. Are we getting there?


2070 = 1080ti

2080 = even faster

2080ti = even fasterer

:)

IMO the 2070 is the best bang for the buck. 2080 if you want to splurge a bit. 2080ti too expensive for what increase you get.


2080 = smokes, see 2080Ti

2080Ti = Keep a Fire Extinguisher handy
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Oddbjornik Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 15 May 99
Posts: 220
Credit: 349,610,548
RAC: 1,728
Norway
Message 1967349 - Posted: 27 Nov 2018, 5:32:12 UTC - in response to Message 1967303.  

Actually one of my hosts is using a Cuda10 driver and the Cuda10 app.. :) on 4 x 750TIs
https://setiathome.berkeley.edu/show_host_detail.php?hostid=8570185
But the 750TI has compute capability 5.0, so how does it run Cuda 10?
Is the software backward compatible, so I can put the Cuda 10 software on my GTX 680 system (while the RTX 2070 is still in the mail) and have it chug along just like that?
Or, perhaps more likely, is there something here that I totally don't understand?
Profile Wiggo
Joined: 24 Jan 00
Posts: 35233
Credit: 261,360,520
RAC: 489
Australia
Message 1967352 - Posted: 27 Nov 2018, 6:22:12 UTC - in response to Message 1967349.  

Actually one of my hosts is using a Cuda10 driver and the Cuda10 app.. :) on 4 x 750TIs
https://setiathome.berkeley.edu/show_host_detail.php?hostid=8570185
But the 750TI has compute capability 5.0, so how does it run Cuda 10?
Is the software backward compatible, so I can put the Cuda 10 software on my GTX 680 system (while the RTX 2070 is still in the mail) and have it chug along just like that?
Or, perhaps more likely, is there something here that I totally don't understand?
The 680 is an older Kepler-based product, whereas the 750 is a newer Maxwell-based product, so I doubt it would be a good choice.

Cheers.
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22290
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1967354 - Posted: 27 Nov 2018, 6:26:06 UTC

Provided the underlying hardware of the GPU is supported by CUDA 10, it will run. In terms of computation there is probably no advantage in running CUDA 10 just now, as nobody has released optimised applications yet (but they are working on them).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Oddbjornik Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 15 May 99
Posts: 220
Credit: 349,610,548
RAC: 1,728
Norway
Message 1967356 - Posted: 27 Nov 2018, 6:51:02 UTC - in response to Message 1967354.  

Provided the underlying hardware of the GPU is supported by CUDA 10, it will run. In terms of computation there is probably no advantage in running CUDA 10 just now, as nobody has released optimised applications yet (but they are working on them).
A-ha, so when it says on this page that "CUDA SDK 10.0 [has] support for compute capability 3.0 – 7.5 (Kepler, Maxwell, Pascal, Volta, Turing)", that means the 680, which has CC 3.0, has a fair chance of working with Cuda 10 software, even though it probably won't be very efficient (?).
I have been confusing the version levels of the compute capability (most recent version is 7.5 with the Turing cards) and the SDK (10.0 is the latest and greatest).
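As a quick illustration of the two separate version axes (a Python sketch; the card-to-CC mapping is taken from NVIDIA's public CUDA GPU list, and the helper names are made up):

```python
# Compute capability (CC) is a hardware property of the card; the CUDA SDK
# version is a software release. SDK 10.0 supports the CC range 3.0 - 7.5,
# so a CC 3.0 GTX 680 falls inside that range even though it is old.
CARD_CC = {
    "GTX 680": 3.0,     # Kepler
    "GTX 750 Ti": 5.0,  # Maxwell
    "GTX 1080 Ti": 6.1, # Pascal
    "RTX 2070": 7.5,    # Turing
}

SDK_CC_RANGE = {"10.0": (3.0, 7.5)}  # (min CC, max CC) supported by the SDK

def sdk_supports(card, sdk="10.0"):
    lo, hi = SDK_CC_RANGE[sdk]
    return lo <= CARD_CC[card] <= hi

print(sdk_supports("GTX 680"))  # True: CC 3.0 is within 3.0 - 7.5
```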
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1967361 - Posted: 27 Nov 2018, 7:39:09 UTC - in response to Message 1967356.  
Last modified: 27 Nov 2018, 7:52:05 UTC

Provided the underlying hardware of the GPU is supported by CUDA 10, it will run. In terms of computation there is probably no advantage in running CUDA 10 just now, as nobody has released optimised applications yet (but they are working on them).
A-ha, so when it says on this page that "CUDA SDK 10.0 [has] support for compute capability 3.0 – 7.5 (Kepler, Maxwell, Pascal, Volta, Turing)", that means the 680, which has CC 3.0, has a fair chance of working with Cuda 10 software, even though it probably won't be very efficient (?).
I have been confusing the version levels of the compute capability (most recent version is 7.5 with the Turing cards) and the SDK (10.0 is the latest and greatest).

I'm not sure what we are talking about. If we are talking about the Linux "special apps", then yes, they do in fact have compute capability limits. It depends on how the apps were compiled, and on what platform and with which compiler. This is from the CA post explaining the platform requirements.

Check the list of supported GPUs here, https://en.wikipedia.org/wiki/CUDA#GPUs_supported
The CUDA 6.0 App requires at least CC=3.5
The CUDA 9.0 App requires at least CC=5.0
The CUDA 9.2 App requires at least CC=6.1

Compiled & Tested in Ubuntu. Read the README_x41p_xxxx.txt file in docs for best use, the CUDA Libraries are included.
The CUDA 6 & 9 Apps will run in Ubuntu 14.04.1 and higher. The CUDA 9.2 App requires Ubuntu 16.04.

The CUDA 6.0 Special App is for the older Kepler CC 3.5 GPUs that might not work well with CUDA 7.5 and above. The CUDA 9.0 App is for most normal systems and is tuned to also run on Maxwell GPUs. Place the expanded files in the setiathome.berkeley.edu folder, and set file permissions if using the Repository version of BOINC.
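The CC thresholds quoted above can be expressed as a small selector (a Python sketch; the helper name is illustrative, the thresholds come from the quoted post):

```python
# Minimum compute capability required by each special-app build,
# per the quoted CA post; newer builds require newer hardware.
APP_MIN_CC = [
    ("CUDA 9.2", 6.1),
    ("CUDA 9.0", 5.0),
    ("CUDA 6.0", 3.5),
]

def best_app(cc):
    """Return the newest special-app build a card of this CC can run."""
    for app, min_cc in APP_MIN_CC:
        if cc >= min_cc:
            return app
    return None  # card too old for any special app

print(best_app(5.0))  # GTX 750Ti (CC 5.0) -> CUDA 9.0
print(best_app(3.0))  # GTX 680 (CC 3.0)   -> None
```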

The GTX 750Ti is a CC 5.0 card and will run on the CUDA 9.0 app. I'm not sure how it is running on the CUDA 10.0 application. I believe there are two versions of the CUDA 10.0 app out there: the original, maybe a reworked one by Petri, and the CA one by TBar.

[Edit] So I went back to the original TBar post for the CUDA10 app and see he compiled it to work with the 750Ti because he is using sm_75 code:
It has sm_75 code, so it's a little bigger. It will work with the 750Ti & higher in 14.04.1 and higher.

So that is how the GTX 750Ti is working with the CUDA10 driver and app. So the original post from CA back in the summer is a little outdated.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Oddbjornik Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 15 May 99
Posts: 220
Credit: 349,610,548
RAC: 1,728
Norway
Message 1967396 - Posted: 27 Nov 2018, 15:48:38 UTC - in response to Message 1967361.  

[Edit] So I went back to the original TBar post for the CUDA10 app and see he compiled it to work with the 750Ti because he is using sm_75 code:
It has sm_75 code, so it's a little bigger. It will work with the 750Ti & higher in 14.04.1 and higher.

So that is how the GTX 750Ti is working with the CUDA10 driver and app. So the original post from CA back in the summer is a little outdated.

And that was the last piece of information I needed in order to understand how it all connects. Thank you, Keith.
This means I'll wait for the RTX 2070 to arrive before I proceed.
Profile Shaggie76
Joined: 9 Oct 09
Posts: 282
Credit: 271,858,118
RAC: 196
Canada
Message 1969591 - Posted: 9 Dec 2018, 19:22:58 UTC

I'll try to remember to start a scan before I retire for the night -- it ties up my laptop for a few hours. There might be enough 2080's to qualify by now.
Profile Shaggie76
Joined: 9 Oct 09
Posts: 282
Credit: 271,858,118
RAC: 196
Canada
Message 1969656 - Posted: 9 Dec 2018, 22:16:41 UTC

Or not -- the servers keep timing out and I can't even finish downloading the host database. I'll try to remember to check again in a few days.
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1969662 - Posted: 9 Dec 2018, 22:56:11 UTC - in response to Message 1969656.  

Or not -- the servers keep timing out and I can't even finish downloading the host database. I'll try to remember to check again in a few days.

It will be appreciated, Shaggie, whenever the servers allow it. My 2080 won't show up, though, because it is running the special app and not OpenCL.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1969664 - Posted: 9 Dec 2018, 23:22:45 UTC

Is it possible to make the same table, but instead of the hosts that use the OpenCL builds, show the ones that run the special CUDA builds?
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1969668 - Posted: 9 Dec 2018, 23:47:33 UTC - in response to Message 1969664.  

Is it possible to make the same table, but instead of the hosts that use the OpenCL builds, show the ones that run the special CUDA builds?

I believe Shaggie answered that question once and said it is not possible.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile lunkerlander
Joined: 23 Jul 18
Posts: 82
Credit: 1,353,232
RAC: 4
United States
Message 1969702 - Posted: 10 Dec 2018, 2:15:38 UTC - in response to Message 1969664.  

Is it possible to make the same table, but instead of the hosts that use the OpenCL builds, show the ones that run the special CUDA builds?


If not, you can probably still infer how Nvidia GPUs' performance compares to one another. I also like to click on the computers tab of some of the top SETI hosts and look at the individual tasks performed by their PCs. You can get a good idea of how many seconds tasks take to complete on various GPUs by doing this.
Profile Shaggie76
Joined: 9 Oct 09
Posts: 282
Credit: 271,858,118
RAC: 196
Canada
Message 1970062 - Posted: 13 Dec 2018, 3:00:55 UTC

I finished aggregating the stats but imgur is having technical issues right now and I can't upload it tonight. As you would expect the RTX 2080 Ti steals the performance crown and the 2070 sets a new record for the performance/watt. I'll try to post tomorrow.
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1970063 - Posted: 13 Dec 2018, 3:08:32 UTC - in response to Message 1970062.  

Whenever you can get it posted, it will be most appreciated by everyone writing out their XMAS shopping lists.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile -= Vyper =-
Volunteer tester
Joined: 5 Sep 99
Posts: 1652
Credit: 1,065,191,981
RAC: 2,537
Sweden
Message 1970114 - Posted: 13 Dec 2018, 8:53:03 UTC - in response to Message 1970062.  

If you take "my host" into the equation, then don't forget that my throughput is lower because it's power-limited to 185W instead of the default 260-265W.
It's also approx 2-3 seconds slower per task than at default.

Thanks for your superb list Shaggie

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
Profile Shaggie76
Joined: 9 Oct 09
Posts: 282
Credit: 271,858,118
RAC: 196
Canada
Message 1970134 - Posted: 13 Dec 2018, 13:39:33 UTC

At last I have data:



I'm sorry that the image isn't as sharp as I'd like -- I've transitioned to a high-DPI laptop, and when I convert the graphs to images Excel does weird things. I really should find a better way to finish off the data -- maybe some Perl GD module would be more consistent.

For reference, here's the number of hosts and tasks analyzed for the top few cards -- there aren't a lot of 20x0 cards in play yet, but they've done enough work that I feel comfortable with the results.

Profile Bill Special Project $75 donor
Volunteer tester
Joined: 30 Nov 05
Posts: 282
Credit: 6,916,194
RAC: 60
United States
Message 1970155 - Posted: 13 Dec 2018, 17:24:14 UTC - in response to Message 1970134.  

Wow, the 2070 jumps off the page! This is the first time I have read this post, so please forgive me if I ask questions that have been asked before:

1. I feel like there is a lot of background with how you compile this data. For example, is the credit/hour and credit/watt-hour calculated all-time, or within a timeframe? Do you have a running list of notes somewhere?
2. I think I browsed one of your earlier posts, and a different NVIDIA card (the 970?) at one point had a higher credit/watt-hour rating. I'm curious what changed there.
3. Do we know what CPU was used in tandem with the GPU credits? I know the CPU provides a minor role in the crunching of the WU, but I wonder if there is a significant difference between one CPU and another.

This is neat to see these types of statistics. I am building a rig right now, and since it is my first build I am starting small with no GPUs. I saw some older GPUs for sale online under $50 (like a GeForce GTX 650). Obviously I would still need to check compatibility with any Boinc projects I want to crunch, but it seemed like a way to increase the number of GPU WUs to crunch on the cheap. I digress, that is probably a conversation for another post.
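For what it's worth, the two metrics in question 1 reduce to simple ratios; a minimal sketch, assuming per-task credit, runtime, and the card's board power are known (function names are made up):

```python
def credit_per_hour(credit, run_seconds):
    # Credit granted per hour of GPU runtime.
    return credit / (run_seconds / 3600.0)

def credit_per_watt_hour(credit, run_seconds, watts):
    # Energy used is watts * hours; divide credit by watt-hours consumed.
    return credit / (watts * run_seconds / 3600.0)

# e.g. a task granting 100 credits in 600 s on a 175 W card:
print(credit_per_hour(100, 600))            # 600.0 credits/hour
print(credit_per_watt_hour(100, 600, 175))  # ~3.43 credits/Wh
```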
Seti@home classic: 1,456 results, 1.613 years CPU time
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1970170 - Posted: 13 Dec 2018, 19:36:47 UTC - in response to Message 1970155.  


This is neat to see these types of statistics. I am building a rig right now, and since it is my first build I am starting small with no GPUs. I saw some older GPUs for sale online under $50 (like a GeForce GTX 650). Obviously I would still need to check compatibility with any Boinc projects I want to crunch, but it seemed like a way to increase the number of GPU WUs to crunch on the cheap. I digress, that is probably a conversation for another post.



If you can find a GTX 750Ti for maybe $70, you will be happier than with the GTX 650. (At least I think so; I'm happy enough that I have 3 of them. :)

Tom
A proud member of the OFA (Old Farts Association).


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.