GPU Wars 2016: GTX 1050 Ti & GTX 1050: October 25th

Profile shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1785448 - Posted: 7 May 2016, 4:17:46 UTC - in response to Message 1785396.  
Last modified: 7 May 2016, 4:19:37 UTC


They did skip a process node, and amp up the memory/subsystem, so the claims seem viable. Time will tell if it's any good for compute, though some of the graphics+VR features seem to imply better/more-flexible processing.

For example, they mentioned physics-based audio processing for VR, which should be heavily floating-point oriented, so we might get the raw TFlops we'd like.


Hmmm... I'm reluctant to believe the numbers as far as SETI performance goes, but IF I'm reading 'em correctly, the 1070 (yes, the 1070) should be slightly better at crunching than the $999 Titan X?

@$379?

Sounds too good to be true.... Allegedly 6.5 TFLOPS.
9 TFLOPS for the 1080.

(BTW I DID get the 1070 launch date wrong, it's June 10th)
ID: 1785448
Profile shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1785464 - Posted: 7 May 2016, 6:09:44 UTC - in response to Message 1785436.  

maybe someone could buy the gpus at normal prices and bring them to you

It's happened in the past.


petri, Kevvy...

Maybe you guys should take the next plane to Aussieland and take your 980s with you. If you can sell 'em within the next couple of weeks you'll have scored a "free"* trip to the land down under :)

*Because whatever money you have left over from the trip, you can invest in 1070s and likely still end up with the same RAC you've got today ;)

Just thinkin' outside the box...
ID: 1785464
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1785486 - Posted: 7 May 2016, 10:12:31 UTC
Last modified: 7 May 2016, 10:15:31 UTC

Some questions

- Any ideas about their real performance in SETI? They could be monsters for gaming, but... how do they crunch?

- Will the 1070 be a real winner, or is it better to go for the 1080?

- Will the 180W power draw (for the 1080, per the specs) on a single 8-pin connector (with its normal 150W limit) be a possible source of future problems? Or do they draw additional power from the PCIe slot?

- What is the max power expected for the 1070?

I know, too many questions, and some will only have answers in the future.

BTW, a 4x10?0 cruncher has been added to my Christmas wish list; I just need to decide whether the ? is a 7 or an 8...
ID: 1785486
Al
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1785499 - Posted: 7 May 2016, 14:16:02 UTC

Juan, the only one I can answer right now is the 180W one: the PCI-E slot can provide 75W on its own, so there should be no problem getting the 180W needed between those two.
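(Working through the arithmetic with the figures quoted in this thread: 75W from the slot plus 150W from one 8-pin connector gives 225W available against the quoted 180W board power, roughly 45W of headroom, assuming the card actually draws up to its slot allowance.)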

My main concern reading this, if everything that has been stated is accurate, actually has to do with availability; I can imagine the demand will be _huge_. Hope they have ramped up the production capabilities, though I expect some hiccups along the way, as this is a new memory design and also a new fab size, so hopefully the yields will be great right out of the gate.

In terms of cost, those are some relatively good prices. I say relatively because Nvidia has done an amazing job of 'Apple-izing' its products, that is, getting us to accept that they are worth more than they actually are. I say this from the perspective of being in computing for a lot of years, and the $200 - $300 - $500 - $695 - $1000(!)+ video card price progression has, to me, been nothing short of amazing to watch. And I know, I've participated in it myself from time to time, but it is still pretty breathtaking.

If these prices are pretty firm, then it seems that things may be moving in the right direction a little, though that does lead me to wonder about both ends of the range and their plans for them. What is their new Titan 10x0 going to spec out at (as to the price, well, it's a given that it will be over a grand per copy), what is their 1050 going to be priced at, and how is it going to perform for us? Actually, how much better any of them are going to work in our application is an open question till they get released, but I know someone here will be on it as soon as they hit the shelves!

ID: 1785499
Profile Zalster
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1785506 - Posted: 7 May 2016, 14:31:19 UTC - in response to Message 1785499.  

Not me, not after watching them release the 980 and then, a few months later, release the 980 Ti with specs that equalled or passed the Titan X's.

Going to wait until they decide to show us all of their products, not just the first few that come out.
ID: 1785506
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1785516 - Posted: 7 May 2016, 15:04:48 UTC - in response to Message 1785499.  
Last modified: 7 May 2016, 15:05:46 UTC

Juan, the only one I can answer right now is the 180W one: the PCI-E slot can provide 75W on its own, so there should be no problem getting the 180W needed between those two.


Thanks. Yes, I know that, but even if each PCIe slot can provide 75W (per the specs), only a few top motherboards can actually supply 4x75W to the PCIe slots at the same time. I learned that the hard way, by melting the 12V line of the motherboard's power connector.

That makes almost no difference when you mainly game, but when you run the GPUs for crunching at full capacity 24/7 you need to consider it.
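A rough sketch of the numbers behind that (illustrative only: 5.5A at 12V is the per-x16-slot allowance assumed from the PCI-E spec, and real boards and cards will differ):

    // Host-side C++ (also valid as a .cu file): estimate the current the motherboard's
    // 12V feed must carry if four cards each use their full slot power allowance.
    #include <cstdio>

    int main()
    {
        const double amps_per_slot_12v = 5.5;          // assumed per-slot 12V allowance (PCI-E spec)
        const int    gpus              = 4;

        double total_amps  = amps_per_slot_12v * gpus; // current through the board's 12V input
        double total_watts = total_amps * 12.0;        // ~22 A, ~264 W: easy to see a connector cooking

        printf("%d slots -> %.1f A (~%.0f W) through the motherboard 12V feed\n",
               gpus, total_amps, total_watts);
        return 0;
    }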

Traditional NV cards (at least the mid/high end) normally don't draw power from the PCIe slot, only from the 6/8-pin connectors.

One thing is really interesting: the max power usage of 180W makes all the difference if you have a high electricity cost. We all know how power hungry GPUs are when crunching flat out.

Let's wait and see the performance, as Z said; they look perfect for watercooling. Soon we will see the answers.

But they are certainly on my wish list.
ID: 1785516
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785523 - Posted: 7 May 2016, 15:14:48 UTC - in response to Message 1785499.  

Basically, the HPC Big Pascal (P100) is definitely a thing, though being huge and depending on HBM memory, it will have yield and cost/complexity issues for a long time yet, so it is targeted at the enterprise compute line (very high cost, low volume).

For these 10x0 GPUs, the choices of GDDR5X and GDDR5, along with a smaller die, are meant for volume and for optimising with minimal new technologies, so supply issues are much less likely (although GDDR5X is only made by Micron).

I believe this strategy of an unexpectedly low price point for the performance (especially the 1070), plus a large performance hike, was meant to address criticisms from the Gaming community that they were focussing on compute over graphics, while at the same time hoping to dominate the VR and 4K gaming spaces, so as to sell volume and recoup some of the several-billion-dollar R&D outlay.

Big Pascal, IMO, has a year or two of maturation to go through due to its use of new processes and technologies; it's too expensive and complex just for driving VR helmets at 90Hz.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785523
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785527 - Posted: 7 May 2016, 15:26:39 UTC - in response to Message 1785516.  

From the look of it, the 180W TDP spec is probably generous. That is a guess, though, gauged from the demo, which showed the card boosted to ~2.1GHz and running at 65 degrees C. It feels like the stated TDP has built-in headroom.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785527
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1785528 - Posted: 7 May 2016, 15:28:04 UTC - in response to Message 1785523.  

criticisms from the Gaming community that they were focussing on compute over graphics ...

Whereas here we complain that the driver-writing department does exactly the opposite?
ID: 1785528
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785531 - Posted: 7 May 2016, 15:39:10 UTC - in response to Message 1785528.  

criticisms from the Gaming community that they were focussing on compute over graphics ...

Whereas here we complain that the driver-writing department does exactly the opposite?


DirectX/WDDM drivers are Microsoft's spec, and Cuda + OpenCL have to use that on Windows (complete with ~16 years of cruft). It's a bit different when Tesla Compute Cluster (non-display) drivers can be used. Fingers crossed both the DirectX12 and Vulkan pipelines streamline the driver complexity back as intended, so that the fragile legacy bits can be bypassed/ignored. Time will tell.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785531
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1785534 - Posted: 7 May 2016, 15:52:21 UTC - in response to Message 1785531.  

But have you been following Jacob Klein's complaint that 364.72 (and 365.10) broke the POEM project's compute application? That's what I meant by paying more attention to gamers, and not enough to computers.
ID: 1785534
OTS
Volunteer tester
Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1785552 - Posted: 7 May 2016, 16:50:09 UTC - in response to Message 1782865.  

I dunno, maybe you need to switch to Linux?
The tasks at Beta were run in Ubuntu with cuda42. I just switched the Linux Mint system over to the newer cuda65 and it seems to be running about the same as the Beta tasks: http://setiathome.berkeley.edu/results.php?hostid=7258715&offset=100
I'm still surprised at how well the cuda42 App works on my 750Ti; I haven't found anything faster... except Petri's Special App.


Is "Petri's Special App" for the 750ti under Linux available for download somewhere?
ID: 1785552
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785556 - Posted: 7 May 2016, 17:00:56 UTC - in response to Message 1785552.  

Is "Petri's Special App" for the 750ti under Linux available for download somewhere?


I believe TBar has a build (as does Petri). Caveats for wider distribution exist; I consider it needs some polish (documentation, plus a few things to resolve regarding recent Linux changes). Alternatively, the source is in the alpha subdirectory of the client directory, intended for roll-your-own use while the infrastructure settles down with all the changes.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785556
OTS
Volunteer tester
Joined: 6 Jan 08
Posts: 369
Credit: 20,533,537
RAC: 0
United States
Message 1785557 - Posted: 7 May 2016, 17:08:23 UTC - in response to Message 1785556.  

Is "Petri's Special App" for the 750ti under Linux available for download somewhere?


I believe TBar has a build (as does Petri). Caveats for wider distribution exist; I consider it needs some polish (documentation, plus a few things to resolve regarding recent Linux changes). Alternatively, the source is in the alpha subdirectory of the client directory, intended for roll-your-own use while the infrastructure settles down with all the changes.



Caveats or prohibitions? I wouldn't want to do something prohibited.
ID: 1785557
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785558 - Posted: 7 May 2016, 17:10:59 UTC - in response to Message 1785534.  

But have you been following Jacob Klein's complaint that 364.72 (and 365.10) broke the POEM project's compute application? That's what I meant by paying more attention to gamers, and not enough to computers.


I haven't, no. I would likely have to see the application source and a detailed problem description to say with any certainty whether there's a driver problem, or whether some application quirk is part of it.

Driver issues with so much new technology being shovelled in (Linux going to virtualised/MT drivers, Cuda 8 + new GPUs, Vulkan, DirectX12...) are very likely. I can only really point out that teething problems are probably inevitable for gaming too.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785558
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1785560 - Posted: 7 May 2016, 17:17:41 UTC - in response to Message 1785558.  
Last modified: 7 May 2016, 17:21:21 UTC

I'm not following closely either - although he asked me to reproduce his SDK sample errors, I felt the workaround he requested (and got) from David went a little over the top.

The most recent update I've seen was GPUGrid 43313:

Another small update -- basically, while NVIDIA fixes the problems, they're requesting additional info to potentially make "Poem@Home" and "PrimeGrid calculation" test cases that could be used in their checklist to release new drivers. That's a GREAT idea, in my opinion :)

A manufacturer testing before release? That is a good idea.

Edit - references:

http://www.primegrid.com/forum_thread.php?id=6775
NVidia BUG ID #1754468
ID: 1785560
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785563 - Posted: 7 May 2016, 17:23:13 UTC - in response to Message 1785557.  
Last modified: 7 May 2016, 17:45:12 UTC

Is "Petri's Special App" for the 750ti under Linux available for download somewhere?


I believe TBar has a build (as does Petri). Caveats for wider distribution exist; I consider it needs some polish (documentation, plus a few things to resolve regarding recent Linux changes). Alternatively, the source is in the alpha subdirectory of the client directory, intended for roll-your-own use while the infrastructure settles down with all the changes.


Caveats or prohibitions? I wouldn't want to do something prohibited.


The only (not really a) 'prohibition' is the reasonable request Petri personally made: that when I use the code toward a Windows release build, I have one for Linux at the same time. That fits well with the way I was headed anyway, gradually getting Windows, Mac and Linux into lockstep build and regression-test automation.

While Linux kernel/driver/dependency changes are causing some grief, and all three platforms are in a bit of flux, or rather convergence, I placed the code in alpha for holding. In the meantime I personally don't mind those with sufficient know-how experimenting with it, since it looks like I'll be occupied with infrastructure changes for some time, and having some people know all the pros and cons of the newer techniques just builds ammunition for refinement as the development and deployment pipeline becomes fully automated (i.e. faster).
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785563
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785567 - Posted: 7 May 2016, 17:42:11 UTC - in response to Message 1785560.  
Last modified: 7 May 2016, 17:43:14 UTC

I'm not following closely either - although he asked me to reproduce his SDK sample errors, I felt the workaround he requested (and got) from David went a little over the top.

The most recent update I've seen was GPUGrid 43313:

Another small update -- basically, while NVIDIA fixes the problems, they're requesting additional info to potentially make "Poem@Home" and "PrimeGrid calculation" test cases that could be used in their checklist to release new drivers. That's a GREAT idea, in my opinion :)

A manufacturer testing before release? That is a good idea.

Edit - references:

http://www.primegrid.com/forum_thread.php?id=6775
NVidia BUG ID #1754468


In my experience, nVidia are exceptionally responsive that way - we've directly affected their QA processes in three specific examples I know of through my account. In the current X-branch case, we're more sensitive to changes in the Cuda runtime or CUFFT libraries than to the driver level specifically (with some exceptions). That may well change with all the new stuff, or with adding other languages/APIs and techniques, so I'll consider opening up the automated regression testing (to them and others) as the infrastructure pads out.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785567
Al
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1785572 - Posted: 7 May 2016, 18:03:18 UTC

Probably an uninformed question, but is CUDA basically dead? I know it was discussed in a prior thread that I can't seem to track down now, but I remember how, back in the day, CUDA was highly optimized for NVidia's cards, and they were working with our developers for a short time, which helped their cards work quite well doing SETI crunching. But OpenCL and now Vulkan are out; am I correct in thinking that OpenCL is trying to somewhat unify the drivers between ATI and Nvidia, and that Vulkan is the new CUDA? Any thoughts on it are appreciated!

ID: 1785572
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1785582 - Posted: 7 May 2016, 18:30:22 UTC - in response to Message 1785572.  

Probably an uninformed question, but is CUDA basically dead? I know it was discussed in a prior thread that I can't seem to track down now, but I remember how, back in the day, CUDA was highly optimized for NVidia's cards, and they were working with our developers for a short time, which helped their cards work quite well doing SETI crunching. But OpenCL and now Vulkan are out; am I correct in thinking that OpenCL is trying to somewhat unify the drivers between ATI and Nvidia, and that Vulkan is the new CUDA? Any thoughts on it are appreciated!


No, they're all related :) There is convergence on the best solutions, but for the most part the languages/APIs evolve rather than die.

All Vulkan really represents is a cross-platform, low-level API for graphics & compute, and its shader & compute 'language' (SPIR-V) as such is not really intended to be written directly. OpenCL, OpenGL/GLSL, or any other graphics shader or compute language can be compiled to SPIR-V and fed to Vulkan.

Cuda provides a higher-level set of libraries and a programming API, while OpenCL didn't specifically integrate graphics or a minimum feature set. Also, Cuda is more about a virtual machine specification (containing Cuda cores and other components) than about its language.

Consider Vulkan as primarily there to give developers low-level access to the hardware (with all the extra responsibility and risks that go along with that).

In that sense you'll see the language you choose matter less, since crafting 'domain-specific languages' that generate compute/graphics code will become the next big thing, as opposed to using Cuda, OpenCL, or anything else that was never made specifically to suit what we need to do.

That, I guess, will only make more sense as things settle down and mature, but the general gist is that the kinds of Fourier analyses we are doing have their own language, and the devices have their own (PTX in the case of Cuda GPUs). By moving to domain-specific languages for the applications (away from Cuda or OpenCL hardware specifics), and to install-time and runtime code generation (like mobiles do), you remove the artificial layer that blocks porting and optimisation, and replace it with portable, hardened, well-optimised components.

It will probably take a couple of years for the full ecosystem to take hold, though a lot of the components have evolved and cross-pollinated over time.
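Purely as an illustrative sketch of the distinction: the kernel below is the sort of trivial element-wise pass an FFT-based search app might run, written here in Cuda C. The names and launch parameters are invented for the example; essentially the same kernel could be written in OpenCL C or compiled to SPIR-V for Vulkan, and what the driver ultimately consumes is PTX or SPIR-V rather than the surface language.

    // Illustrative only: scale the complex bins of a spectrum in place.
    __global__ void scale_spectrum(float2 *spectrum, float gain, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per bin
        if (i < n) {
            spectrum[i].x *= gain;                      // real part
            spectrum[i].y *= gain;                      // imaginary part
        }
    }

    // Host side, assuming d_spectrum already holds n complex bins on the GPU:
    //   scale_spectrum<<<(n + 255) / 256, 256>>>(d_spectrum, 0.5f, n);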
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1785582