Best bang-for-buck GPU 2018?

Profile petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1929044 - Posted: 9 Apr 2018, 16:16:46 UTC - in response to Message 1929043.  

I am working on getting you that 1050 Ti information. What I need to know is: what is the nvidia-smi query for the current power usage?

nvidia-smi -l
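If the full -l loop is more than you need, a couple of other invocations should also work on drivers that expose power readings (just a sketch; the exact fields and output vary with the driver version):

nvidia-smi -q -d POWER -l 5                                            # only the power section, refreshed every 5 seconds
nvidia-smi --query-gpu=name,power.draw,power.limit --format=csv -l 5   # current draw and cap per card, as CSV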
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1929044 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1929045 - Posted: 9 Apr 2018, 16:21:09 UTC - in response to Message 1929034.  
Last modified: 9 Apr 2018, 16:27:16 UTC

Now we need just 1050 & 1050Ti to pick the winner.

You're ignoring the 750 Ti which only draws 30 watts.
This machine is running a 30 watt 750 Ti, https://setiathome.berkeley.edu/results.php?hostid=7769537&offset=100
Around 400 seconds, at 30 watts?
This machine is running the plain 1050, which isn't much different than a Ti, https://setiathome.berkeley.edu/results.php?hostid=6906726&offset=200
The current drivers don't show the watts, but previous ones put it at around 65-70 watts, that being the Super Super-Clocked version. My 750 Ti isn't super-clocked; it isn't even OCed.
750 Ti at 400 secs and 30 watts,
or
1050 at 290 secs and 67 watts?

Looks good for the 750 Ti.

Oh, this machine is running the 1050 Ti SC, and the times are about the same as the 1050 SSC, https://setiathome.berkeley.edu/result.php?resultid=6551333385

BTW, in my tests CUDA 9.1 slows a GTX 950 by about a minute compared to CUDA 9.0; CUDA 9.0 is much faster on a 950 than CUDA 9.1. I tried multiple builds, same results.
ID: 1929045 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1929052 - Posted: 9 Apr 2018, 16:46:32 UTC - in response to Message 1929045.  
Last modified: 9 Apr 2018, 17:09:50 UTC

You're ignoring the 750 Ti which only draws 30 watts.

My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

This machine is running a 30 watt 750 Ti, https://setiathome.berkeley.edu/results.php?hostid=7769537&offset=100
Around 400 seconds, at 30 watts?

Unless I read something wrong, this host uses CUDA 7.5.

This machine is running the plain 1050, which isn't much different than a Ti, https://setiathome.berkeley.edu/results.php?hostid=6906726&offset=200

Data added to the table. Thanks.


Oh, this machine is running the 1050 Ti SC, and the times are about the same as the 1050 SSC, https://setiathome.berkeley.edu/result.php?resultid=6551333385

I didn't add it to the table because I don't know the power it uses, and the times are similar to the 1050, as explained.

BTW, in my tests CUDA 9.1 slows a GTX 950 by about a minute compared to CUDA 9.0; CUDA 9.0 is much faster on a 950 than CUDA 9.1. I tried multiple builds, same results.

That is nice to know. As you can see from the table, Petri's crunching times on similar GPUs (he runs CUDA 9.1) are faster than Keith's (who runs CUDA 9.0). Unfortunately I don't have the CUDA 9.1 apps to test on my hosts to see if that replicates on the 1070. If anyone has that info, it would be nice to know.

So, adding TBar's 1050, the table looks like:

Host   GPU       Crunching Time   Power   CUDA
TBar   1050      270 secs          67 W   9.0
Keith  1060      165 secs          88 W   9.0
Juan   1060      176 secs          86 W   9.0
Juan   1070      127 secs         117 W   9.0
Keith  1080      100 secs         135 W   9.0
Petri  1080       90 secs         150 W   9.1
Keith  1080 Ti    72 secs         208 W   9.0
Petri  1080 Ti    68 secs         220 W   9.1
Petri  Titan V    39 secs         140 W   9.1
ID: 1929052 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1929055 - Posted: 9 Apr 2018, 17:14:27 UTC - in response to Message 1929052.  

As I've stated previously, the CUDA versions make little difference unless you start going back to before 7.5. CUDA 7.0 is a washout, much slower than even CUDA 6.0. What makes the difference is Petri's tweaking, or the App versions. Newer App versions usually run faster; unfortunately anything newer than zi3v will result in Many Invalid results....but it is a little faster. Just look at this machine running zi3xs2, it's loaded with Invalids whereas my machine running zi3v has None, https://setiathome.berkeley.edu/results.php?hostid=8424399

I can assure you, My 750 Ti runs the same with CUDA 7.5 or CUDA 9.0, as there is little difference between 7.5, 8.0, or 9.0. I should know, I compiled the Apps.
ID: 1929055 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1929056 - Posted: 9 Apr 2018, 17:25:09 UTC - in response to Message 1929055.  
Last modified: 9 Apr 2018, 17:26:15 UTC

I can assure you, My 750 Ti runs the same with CUDA 7.5 or CUDA 9.0, as there is little difference between 7.5, 8.0, or 9.0. I should know, I compiled the Apps.

OK. So, adding TBar's 750 Ti, the table looks like:

Host   GPU       Crunching Time   Power   CUDA
TBar   750 Ti    400 secs          30 W   7.5
TBar   1050      270 secs          67 W   9.0
Keith  1060      165 secs          88 W   9.0
Juan   1060      176 secs          86 W   9.0
Juan   1070      127 secs         117 W   9.0
Keith  1080      100 secs         135 W   9.0
Petri  1080       90 secs         150 W   9.1
Keith  1080 Ti    72 secs         208 W   9.0
Petri  1080 Ti    68 secs         220 W   9.1
Petri  Titan V    39 secs         140 W   9.1
ID: 1929056 · Report as offensive
rob smith · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22508
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1929058 - Posted: 9 Apr 2018, 17:37:31 UTC

Makes even more interesting reading if one looks at total energy consumption:
Host   GPU       Crunching Time   Power   CUDA   Energy (Ws)
TBar   750 Ti    400 secs          30 W   7.5    12000
TBar   1050      270 secs          67 W   9.0    18090
Keith  1060      165 secs          88 W   9.0    14520
Juan   1060      176 secs          86 W   9.0    15136
Juan   1070      127 secs         117 W   9.0    14859
Keith  1080      100 secs         135 W   9.0    13500
Petri  1080       90 secs         150 W   9.1    13500
Keith  1080 Ti    72 secs         208 W   9.0    14976
Petri  1080 Ti    68 secs         220 W   9.1    14960
Petri  Titan V    39 secs         140 W   9.1     5460

(Calculated energy in watt-seconds = run time × quoted power.)
The clear winner is the Titan V, second the venerable & venerated GTX 750 Ti, the "loser" the GTX 1050, and the rest bunch together pretty well.
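As a quick sketch of that arithmetic in a shell, using the 750 Ti row above (GPU power only, not the whole box):

echo $((400 * 30))                      # 12000 watt-seconds per task
echo "scale=2; 400 * 30 / 3600" | bc    # about 3.33 Wh per task
echo "scale=0; 3600000 / 12000" | bc    # about 300 tasks per kWh of GPU energy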
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1929058 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1929059 - Posted: 9 Apr 2018, 17:45:39 UTC - in response to Message 1929058.  
Last modified: 9 Apr 2018, 17:45:54 UTC

Makes even more interesting reading if one looks at total energy consumption:

Sure. Thanks Rob.
It could be even more interesting if we added the cost of the GPU itself.
For sure the 750 Ti would win by a large margin. LOL
ID: 1929059 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1929060 - Posted: 9 Apr 2018, 17:53:30 UTC - in response to Message 1929056.  
Last modified: 9 Apr 2018, 17:56:31 UTC

BTW, here's a 750 Ti running CUDA 9.0 but using a test version of zi3x. Note his times are just slightly faster than zi3v, but he has quite a few Invalids, https://setiathome.berkeley.edu/results.php?hostid=8053171&offset=400
zi3x-32 was a test to see what would work on the Kepler cards; you really won't notice much difference running zi3v, except you won't have any Invalids and the Inconclusives would be lower.

Also, Petri is Not running zi3v. That is a Major difference.
ID: 1929060 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1929063 - Posted: 9 Apr 2018, 18:08:44 UTC - in response to Message 1929058.  


(Calculated energy in watt-seconds = run time × quoted power.)
The clear winner is the Titan V, second the venerable & venerated GTX 750 Ti, the "loser" the GTX 1050, and the rest bunch together pretty well.

I'll have to go back to the technical breakdown of the TitanV to see what feature size and architecture changes make it so economical in power consumption.

It certainly isn't going to win the cost-per-watt contest, but it may bode well for future consumer-level cards based on the Titan V architecture.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1929063 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22508
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1929064 - Posted: 9 Apr 2018, 18:12:09 UTC

Petri running his pre-pre-pre-release special makes the comparison between the two 1080 Tis "interesting", as it gives some idea of the difference between his latest (potential) offering and the current top hitter (zi3v): about 5-6% improvement in run time for about 6% more power (or essentially the same energy consumed per task).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1929064 · Report as offensive
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1929090 - Posted: 9 Apr 2018, 21:02:42 UTC

Performance per watt is certainly a factor that is intriguing to some, but I think we should steer this thread back to its intended performance-per-dollar GPU price metric.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1929090 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1929097 - Posted: 9 Apr 2018, 21:31:00 UTC - in response to Message 1929090.  

Agreed, the original intent of the post was cost vs. performance. That said, one last comment on the Titan V's Volta architecture. It is based on a 12 nm feature size, which is just a slight process improvement over Pascal's 16 nm. From the power improvements posted, I thought Volta might be based on a 10 nm feature size, so the design probably has more to do with the low power consumption across so many CUDA cores than the process does. The new king in cost versus performance would then be a Volta version of the 1050 Ti or 1060.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1929097 · Report as offensive
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1929109 - Posted: 9 Apr 2018, 22:41:29 UTC - in response to Message 1929052.  


My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

OK, my nvidia-smi version does not give current wattage, only max. I have two 1050 Tis at 75 watts each and one 1050 Ti at 130 watts.
And I am running W10.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1929109 · Report as offensive
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1929113 - Posted: 9 Apr 2018, 23:08:37 UTC - in response to Message 1929090.  

Performance per watt is certainly a factor that is intriguing to some, but I think we should steer this thread back to its intended performance-per-dollar GPU price metric.


hahaha....

Yeah, we do love any excuse to get together and jaw about processing. I'd consider it a compliment, lol....
ID: 1929113 · Report as offensive
Profile Wiggo
Joined: 24 Jan 00
Posts: 36639
Credit: 261,360,520
RAC: 489
Australia
Message 1929117 - Posted: 9 Apr 2018, 23:32:26 UTC - in response to Message 1929109.  


My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

OK, my nvidia-smi version does not give current wattage, only max. I have two 1050 Tis at 75 watts each and one 1050 Ti at 130 watts.
And I am running W10.

GPU-Z will give you a better idea of what they're actually using. ;-)

Cheers.
ID: 1929117 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1929125 - Posted: 10 Apr 2018, 0:12:37 UTC - in response to Message 1929090.  

Performance per watt is certainly a factor that is intriguing to some, but I think we should steer this thread back to its intended performance-per-dollar GPU price metric.

The problem with doing that is the "madness" of GPU market prices.
That's why I only put the time, watts, etc.
Everyone can do the math with prices from their own suppliers.
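As a sketch of how a local price could be folded in (the 250 below is purely a placeholder, not a real quote; the run time is the 1070 row from the table above):

RUNTIME=127    # seconds per task
PRICE=250      # hypothetical price from your own supplier
echo "scale=0; 86400 / $RUNTIME" | bc            # roughly 680 tasks per day
echo "scale=2; $PRICE * $RUNTIME / 86400" | bc   # price per task-per-day; lower is better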
ID: 1929125 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1929129 - Posted: 10 Apr 2018, 0:30:02 UTC - in response to Message 1929109.  


My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

OK, my nvidia-smi version does not give current wattage, only max. I have two 1050 Tis at 75 watts each and one 1050 Ti at 130 watts.
And I am running W10.

The nvidia-smi version is updated for each driver version. Your driver version 388 should report the current power draw for each card along with the max TDP. It has worked that way for me in Win 10 as far back as the 375 drivers, as far as I can remember.

Mon Apr 09 17:21:23 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 391.24                 Driver Version: 391.24                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070   WDDM  | 00000000:04:00.0 Off |                  N/A |
| 96%   57C    P2   111W / 169W |   2323MiB /  8192MiB |     91%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070   WDDM  | 00000000:09:00.0  On |                  N/A |
| 98%   61C    P2    97W / 169W |   2408MiB /  8192MiB |     87%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  WDDM | 00000000:0A:00.0 Off |                  N/A |
|  0%   38C    P2   167W / 300W |   3183MiB / 11264MiB |     89%      Default |
+-------------------------------+----------------------+----------------------+

But GPU-Z only reports the current power usage as a percentage of total TDP. So you would have to know the max TDP of the 750 Ti and 1050 Ti to deduce the actual current power draw in watts.
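As a rough example, assuming a card rated at about 75 W board power (roughly what a stock 1050 Ti is specified at) and a GPU-Z reading of 80% TDP:

echo "scale=1; 75 * 80 / 100" | bc    # estimated draw: 60.0 W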
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1929129 · Report as offensive
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1929139 - Posted: 10 Apr 2018, 1:27:44 UTC - in response to Message 1929129.  
Last modified: 10 Apr 2018, 1:41:50 UTC


My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

OK, my nvidia-smi version does not give current wattage, only max. I have two 1050 Tis at 75 watts each and one 1050 Ti at 130 watts.
And I am running W10.

The nvidia-smi version is updated for each driver version. Your driver version 388 should report the current power draw for each card along with the max TDP. It has worked that way for me in Win 10 as far back as the 375 drivers, as far as I can remember.




https://www.dropbox.com/s/i5gsysrs9dmk4fd/smi.jpg?dl=0

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1929139 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1929142 - Posted: 10 Apr 2018, 1:32:18 UTC - in response to Message 1929139.  

My idea was to test only Linux boxes with multiple GPUs running CUDA 9.0 or later, the ones that don't show up on Shaggie76's GPU chart. The 750 Ti is the clear winner on his chart.

OK, my nvidia-smi version does not give current wattage, only max. I have two 1050 Tis at 75 watts each and one 1050 Ti at 130 watts.
And I am running W10.

The nvidia-smi version is updated for each driver version. Your driver version 388 should report the current power draw for each card along with the max TDP. It has worked that way for me in Win 10 as far back as the 375 drivers, as far as I can remember.



Huh?? Never seen that before. It must be looking for some other support in the Windows environment that I have but you don't. Sorry about that. I guess you will have to rely on GPU-Z and its percentage loading numbers to calculate the actual watts consumed.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1929142 · Report as offensive
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1929143 - Posted: 10 Apr 2018, 1:32:46 UTC

After thinking about it, I think the 65 to 70 watts is for the GTX 950 SC. The GTX 1050 SSC doesn't show the usage with the recent drivers; I had to go all the way back to the CUDA Toolkit 9 driver.
But....it works:
Mon Apr  9 21:21:25 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81                 Driver Version: 384.81                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:01:00.0  On |                  N/A |
| 38%   61C    P0    59W /  75W |   1704MiB /  1998MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1050    Off  | 00000000:08:00.0 Off |                  N/A |
| 35%   59C    P0    60W /  75W |   1500MiB /  1999MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       772      G   /usr/bin/X                                   169MiB |
|    0      1144      G   compiz                                        40MiB |
|    0      2888      C   ...me_x41p_zi3v_x86_64-pc-linux-gnu_cuda90  1491MiB |
|    1      2889      C   ...me_x41p_zi3v_x86_64-pc-linux-gnu_cuda90  1489MiB |
+-----------------------------------------------------------------------------+

So, not quite as bad. This is the Super Super-Clocked 1050 version; it pulls a little more power. The GTX 950 is merely 'Super-Clocked'.
ID: 1929143 · Report as offensive