GPU Wars 2016: GTX 1050 Ti & GTX 1050: October 25th


Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1799534 - Posted: 30 Jun 2016, 1:20:19 UTC - in response to Message 1799524.  

And now a GTX 1060 has been spotted in Hong Kong...

Given the difficulties in laying hands on GTX 1080s & 1070s, I think it'll be a while before we see many GTX 1060s.
Grant
Darwin NT
ID: 1799534
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19062
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1799875 - Posted: 1 Jul 2016, 9:27:49 UTC

AMD RX 480 power consumption: there are reports of the card exceeding the 75W PCIe slot limit, and of it exceeding the 150W combined limit for the slot plus 6-pin connector as well. Those who have tried overclocking report consumption of over 200W.

This is one respected site's report, but there are others: Tom's Hardware - AMD Radeon RX 480 8GB Review - Power consumption
ID: 1799875
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1800075 - Posted: 2 Jul 2016, 5:33:34 UTC - in response to Message 1799534.  
Last modified: 2 Jul 2016, 5:34:09 UTC

And now a GTX 1060 has been spotted in Hong Kong...

Given the difficulties in laying hands on GTX 1080s & 1070s, I think it'll be a while before we see many GTX 1060s.

I may be wrong, but rumours are that the GTX 1060 is meant to be launched on the 7th of July and available for purchase from the 14th of July.
It will be interesting to see how it performs, how available it is, and, most interestingly, how they price it: in line with the GTX 1070/1080 pricing, or matching the RX 480 pricing to try & hurt AMD as much as possible?
Grant
Darwin NT
ID: 1800075
[VENETO] boboviz
Volunteer tester

Joined: 5 Oct 99
Posts: 16
Credit: 613,159
RAC: 0
Italy
Message 1800090 - Posted: 2 Jul 2016, 10:01:21 UTC - in response to Message 1799875.  

AMD RX 480 power consumption: there are reports of the card exceeding the 75W PCIe slot limit, and of it exceeding the 150W combined limit for the slot plus 6-pin connector as well. Those who have tried overclocking report consumption of over 200W.


Custom versions will probably have an 8-pin connector.

The RX 470 is my target, but I don't know when AMD will release it.
ID: 1800090
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19062
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1800094 - Posted: 2 Jul 2016, 11:44:38 UTC - in response to Message 1800090.  
Last modified: 2 Jul 2016, 12:01:28 UTC

AMD RX 480 power consumption: there are reports of the card exceeding the 75W PCIe slot limit, and of it exceeding the 150W combined limit for the slot plus 6-pin connector as well. Those who have tried overclocking report consumption of over 200W.


Custom versions will probably have an 8-pin connector.

The RX 470 is my target, but I don't know when AMD will release it.

There is a YouTube - PCPer - AMD Radeon RX 480 Power Concerns - Detailed Analysis video that gives a reasonable explanation of the problem.

As it stands at the moment, I would say AMD needs to lower the power pulled from the PCIe slot and change the 6-pin to an 8-pin. The 8-pin is probably needed to account for the people who overclock.
AMD also needs to reassess the published TDP figure if they are now able to lower it for all games.

Those thinking of running two of these cards in CrossFire**, because it is cheaper than one Nvidia GTX 1080, need to take into consideration the power consumption over the time they are likely to use these cards.

**N.B. They also need to check that the games they run can actually scale up and use CrossFire.

One issue I do have a problem with is the power spikes, which these two guys say are not a problem. As a one-time designer of SMPS and DC-DC converters, my view is that those spikes will have a detrimental effect on the motherboard power components in the long term. Power MOSFETs, for instance, are made of thousands of tiny MOSFETs. Like a chain, the weakest of these may fail when stressed, placing more stress on the survivors and leading to a domino effect over time.
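
To make that domino effect concrete, here is a toy sketch in Python (the cell count and current limits are made up for illustration; this is not a device simulation): N parallel cells share the load current equally, and once the weakest cells fail, the survivors each carry more.

```python
import random

# Toy illustration of the weakest-link cascade described above: a power
# MOSFET modelled as N parallel cells sharing the load current equally.
# Each cell tolerates a slightly different current; when the weakest cells
# fail, the survivors carry more, which can push the next-weakest over.
random.seed(1)
N = 1000
limits = [0.05 * random.uniform(0.9, 1.1) for _ in range(N)]  # amps per cell

def surviving_cells(load_amps):
    alive = list(limits)
    while alive:
        per_cell = load_amps / len(alive)
        survivors = [lim for lim in alive if lim >= per_cell]
        if len(survivors) == len(alive):
            return len(alive)        # stable: every cell can carry its share
        alive = survivors            # redistribute the current and try again
    return 0

print(surviving_cells(40.0))   # nominal load: all 1000 cells survive
print(surviving_cells(46.0))   # ~15% overload: the failures cascade to 0
```

With these toy numbers a modest sustained overload is enough to tip the whole device over, which is the long-term worry with repeated out-of-spec spikes.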
ID: 1800094
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1800126 - Posted: 2 Jul 2016, 15:47:54 UTC - in response to Message 1800094.  
Last modified: 2 Jul 2016, 15:53:17 UTC

The problem I have with all of this is that it *feels* like when the NV GTX 480 came out (the 480 numerical similarity I'm sure being coincidental). That is to say, the NV 480 (Fermi) came out ahead of its time and broke a lot of rules, power and noise envelopes included, but proved to be a formidable cruncher. Short story: try to take the hysteria for what it is, and assess the card on its actual real-world performance.

[Edit:] I don't think we need to be social justice warriors defending brands here. The confusion is justified and real. Unless they come up with something better, IMO both main parties are dropping the ball for us, on purpose.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1800126
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1800137 - Posted: 2 Jul 2016, 16:15:30 UTC
Last modified: 2 Jul 2016, 16:16:27 UTC

[tweeted] "@JenHsunHuang Hi Mr Huang. We have a dilemma in the compute world at the moment. Your gaming hysteria is muddying real compute concerns"
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1800137
arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1800172 - Posted: 2 Jul 2016, 20:40:35 UTC

I'm happy with my GTX-1070.

ID: 1800172
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 1800174 - Posted: 2 Jul 2016, 20:49:48 UTC

I received a GTX 1070 Founders Edition card almost two weeks ago, and have since then been running Einstein BRP6/CUDA55 work on this host, and have discussed my Einstein results in this Einstein forum thread.

I've about reached the end of tinkering, and plan shortly to add back into the PC a 750Ti card, both to see how much of the 1070 productivity survives sharing the CPU resource and downgrading the PCI-E link width from x16 to x8, and just to get higher total output (back to supporting Einstein by doing work, not by sharing experimental results).

Today I dedicated a few hours on the same box to running SETI. I kept the same "safe but serious" overclock I took several days to find for Einstein, and I continue to have a moderately boosted fan curve (which affects performance, as the card automatically changes core clock speed in response to temperature, even well below the widely-known 83C threshold). I discuss the overclocking in detail in the Einstein thread. Here I'll just say that while the card actually runs BOINC work in the P2 state, the method I found was to specify clock offsets for the P0 state using Nvidia Inspector with no BOINC GPU jobs running (the card thus being reported in the P8 state). The actual long-term clock rates during processing, as reported by GPU-Z, were 2062.5 core clock and 2304.0 memory clock.
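
For anyone who wants to log the same numbers GPU-Z shows (P-state, core and memory clocks, temperature) while tasks run, here is a minimal monitoring sketch in Python using the NVML bindings. It assumes the nvidia-ml-py ("pynvml") package and GPU index 0; note that NVML may report the memory clock with a different multiplier than GPU-Z uses.

```python
import time
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetClockInfo, nvmlDeviceGetPerformanceState,
                    nvmlDeviceGetTemperature, NVML_CLOCK_GRAPHICS,
                    NVML_CLOCK_MEM, NVML_TEMPERATURE_GPU)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)          # first GPU in the system
    for _ in range(10):                          # ten samples, 5 s apart
        core = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)   # MHz
        mem = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_MEM)         # MHz
        pstate = nvmlDeviceGetPerformanceState(gpu)               # 0 = P0
        temp = nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU)
        print(f"P{pstate}  core {core} MHz  mem {mem} MHz  {temp} C")
        time.sleep(5)
finally:
    nvmlShutdown()
```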

I had previously run a little using the SETI stock applications. Today I first installed the Lunatics V0.44 package, but botched the install in that I initially selected CUDA32, and it also took me a while to find the means to run three tasks on the GPU at once. A reinstall got me to the intended CUDA50. Subsequently I did the V0.45 Beta 3 install, selecting the SoG option, and redid the count parameter edits so things again ran 3X.

I did not collect enough data before the Beta3/SoG runs to give useful quantitative results, but I did observe a clear, immediate increase in power consumption, GPU temperature, and rate of progress on half-done work when switching from CUDA50 to SoG. The difference was large; I'd guess a factor of two in elapsed time.

In attempting useful timing runs, I thought it best to compare like with like. For example, in a mixed load of guppi and ordinary tasks, neither flavor will run in the elapsed times shown by "pure" sets.

Recalling that this is a 1070 running at substantial overclock, on a pretty capable (quad-core Haswell) host with nothing else to do, my observed elapsed times in all cases running three like tasks simultaneously (with a link to a characteristic task) were:

blc3_2bit_guppi: about 26 minutes (sample)
ordinary V8: about 7 minutes (sample)
high-CPU V8: just over 14 minutes (sample)

I don't know what the distinction is between the tasks I've designated as ordinary vs. high-CPU V8, but they arrived on my system with considerably different execution time estimates, and while the ordinary ones had about 17% of a CPU core charged to them for support, the high-CPU ones have 98% of a CPU charged.

I have no idea whether these are good results or poor. I did not venture into command-line parameter tuning or anything else beyond setting the tasks to run three at a time, using task suspension to gather several sequential like-to-like runs, and paying attention to the middle of each set.
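
To turn elapsed times like those above into throughput figures, here is a small sketch (the times are from the list above; three concurrent tasks assumed throughout):

```python
# Convert "N tasks at a time, M minutes each" into tasks per hour, using
# the elapsed times reported above (3 concurrent tasks on the GTX 1070).
CONCURRENT = 3

def tasks_per_hour(elapsed_min, concurrent=CONCURRENT):
    # Each of the `concurrent` slots finishes one task every elapsed_min minutes.
    return concurrent * 60.0 / elapsed_min

for name, minutes in [("blc3_2bit_guppi", 26.0),
                      ("ordinary V8", 7.0),
                      ("high-CPU V8", 14.0)]:
    print(f"{name:16s} ~{tasks_per_hour(minutes):4.1f} tasks/hour")
```

That works out to roughly 6.9 guppi, 25.7 ordinary, or 12.9 high-CPU tasks per hour under these (quite favourable) conditions.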
ID: 1800174
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1800184 - Posted: 2 Jul 2016, 21:28:16 UTC
Last modified: 2 Jul 2016, 22:11:10 UTC

Hi Arch, I've been running my new 1080 FTW for about a week now in this system, and I have it overclocked: the GPU is at 2050 and the memory is at 4513. The power target in Precision X OC is at 117%, the temp target is 91, the GPU clock offset is +35MHz and the memory clock offset is +16. Voltage is stock at 1062. I don't think I am really pushing it very hard right now, at least going by the temps: it is about 75 degrees F in the house, and the card is running between 34-36C. I am currently running the fan at 100% to prevent any downclocking, though with the temps I am seeing I think I'll back it down slowly to see what effect that has. This is the ACX 3.0 version of the card, and it seems to be doing a pretty good job. My 980 Ti consistently runs in the mid 50s to upper 60s depending on room temp, GPU load, and how hard the other GPUs near it are running.

Thought I'd toss that info out to you so you can get a comparison to your card, even if it's comparing a 1070 to a 1080. Let me know if there's any other info you'd like. Oh, and I tried hitting the KBoost button in Precision to see what it did; it tells me that I need to close out 3D applications like IE and BOINC. Huh, interesting; I guess I'll be passing on that for the time being. I am also adding a couple of screenshots of completed and in-progress tasks to the thread I started when I set up that system. They should be posted soon, so you can compare your results to what mine is doing running the latest Lunatics SoG install.

ID: 1800184
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1800212 - Posted: 2 Jul 2016, 23:20:28 UTC - in response to Message 1800126.  
Last modified: 2 Jul 2016, 23:21:26 UTC

The problem I have with all of this is that it *feels* like when the NV GTX 480 came out (the 480 numerical similarity I'm sure being coincidental). That is to say, the NV 480 (Fermi) came out ahead of its time and broke a lot of rules, power and noise envelopes included, but proved to be a formidable cruncher. Short story: try to take the hysteria for what it is, and assess the card on its actual real-world performance.

The GTX 480 ran hot & used a lot of power, but it was still within the specs of the PCIe slot & auxiliary power connectors, wasn't it?

If motherboards start dying after 6, 12 or 24 months because they have to supply more power than they were designed to, that is a bad thing.
If you look at the graphs at Tom's Hardware Guide for the RX 480 & the GTX 1080, you'll see that both cards have quite a few very high spikes; however, the GTX 1080's are neither as frequent nor as far above the power rating.
And the RX 480 never drops down to the 75W limit for the PCIe slot; it's well above it for the whole run.

Usually, for a particular power supply rating, if the nominal load is say 100W, the supply would generally be designed to deliver above that for a certain period of time; how much and for how long depends on a lot of factors.
An example would be:
100W nominal load
10% overload for 5 minutes
50% overload for 5 seconds
100% overload for 0.5 seconds

The 100% overload would be (for example) to meet the startup load of a motor, and there would be a minimum time between starts. If there were extended periods of heavier-than-rated loads, or more frequent starts, a higher-rated power supply would be necessary.
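
Here is a sketch of how such an envelope could be checked against a measured load trace (Python; the rating tiers are the example figures above, and the trace and sampling interval are made up):

```python
# Hypothetical overload envelope for a 100W-rated supply, matching the
# example above: (overload factor, maximum allowed duration in seconds).
ENVELOPE = [
    (1.10, 300.0),   # 10% overload for up to 5 minutes
    (1.50, 5.0),     # 50% overload for up to 5 seconds
    (2.00, 0.5),     # 100% overload for up to 0.5 seconds
]
NOMINAL_W = 100.0

def violates_envelope(samples, dt):
    """samples: watts drawn, sampled every dt seconds.
    True if any overload tier is sustained for longer than allowed."""
    for factor, max_secs in ENVELOPE:
        run = 0.0
        for watts in samples:
            run = run + dt if watts > NOMINAL_W * factor else 0.0
            if run > max_secs:
                return True
    return False

# A card sitting at 115W continuously busts the "10% for 5 minutes" tier:
print(violates_envelope([115.0] * 400, dt=1.0))  # True
```

The point being that brief spikes are designed for, but a load that sits above the rating continuously (as the RX 480 does on the PCIe slot) never falls back inside any tier.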

As others have mentioned, two 6-pin auxiliary connectors or one 8-pin connector should resolve the problem.
All of the reviews I've seen so far look at gaming performance; I'm still looking for some that show compute performance.


And the word from AMD-

From the TechReport Statement from AMD
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).
Grant
Darwin NT
ID: 1800212
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1800255 - Posted: 3 Jul 2016, 2:03:55 UTC - in response to Message 1800172.  

I'm happy with my GTX-1070.


Looks like the sweet spot for the moment. My dilemmas relate to CUDA 8 being the last to support the Fermi class (i.e. GTX 480 through 5x0, and some OEM cards with later numbers), the deprecation of which seems (to me) a little too soon after the same happened to Tesla-class support and 32-bit support in CUDA 6.5+.

Understandable in a pure gaming context, but not to me in a GPU-acceleration context, where we can be talking about otherwise serviceable laptops, or, in our specialised crunching case, about further fragmenting the already confusing (to users) build landscape.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1800255
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1800256 - Posted: 3 Jul 2016, 2:05:36 UTC - in response to Message 1800212.  

...
And the word from AMD-

From the TechReport Statement from AMD
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).


Ugh, that would seem to be a QA/QC problem then. Well, I hope they don't have to knobble them too much...
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1800256
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19062
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1801057 - Posted: 6 Jul 2016, 11:45:27 UTC

A report on the new RX 480 driver says it will reduce power consumption, pushing the power draw from the PCIe slot below the 75W threshold, and increase performance.

http://wccftech.com/amd-rx-480-power-update/

AMD’s full statement – published 00:17 AM on Wednesday July the 6th, 2016

We promised an update today (July 5, 2016) following concerns around the Radeon™ RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We’re pleased to report that this driver—Radeon Software 16.7.1—is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we’ve implemented a change to address power distribution on the Radeon RX 480 – this change will lower current drawn from the PCIe bus.
Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.

Finally, we’ve implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the “compatibility” toggle.

AMD is committed to delivering high quality and high performance products, and we’ll continue to provide users with more control over their product’s performance and efficiency. We appreciate all the feedback so far, and we’ll continue to bring further performance and performance/W optimizations to the Radeon RX 480.
ID: 1801057
shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1801059 - Posted: 6 Jul 2016, 12:20:25 UTC - in response to Message 1801057.

Ouch?
ID: 1801059
shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1801268 - Posted: 7 Jul 2016, 9:47:28 UTC - in response to Message 1801059.  

Ouch?


It appears to be a bit of an "ouch" in the sense that, while it's not too big a deal, there'll probably be tons of moaning going on on the internet. Anyway, I read the comments in the Anand link and found one that maybe best explains what's going on (for now).

by tipoo:
"Seems like the fix and lower power toggle all but confirms a last minute overclock. They also say the 3% boost in games the new driver provides should substantially offset any impact from the lower energy setting, so it seems pretty sure to me they got worried and pushed it past what they found was it's peak efficiency point, or at least the better tradeoff of efficiency to performance (since power use scales more than linearly with voltage+frequency, while performance scales less than linearly with clock), in favor of boosting it a bit more.

All that debacle didn't seem worth it if it's 3-5% they squeezed out."
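
A rough worked example of that tradeoff, using the usual dynamic-power approximation P ∝ V²·f (the voltage-per-clock points here are invented purely for illustration):

```python
# Dynamic power scales roughly as C * V^2 * f, and higher clocks usually
# need extra voltage, so power grows much faster than the clock does.
def rel_power(v, f):
    return v ** 2 * f              # relative to the V=1.0, f=1.0 baseline

# (clock multiplier, voltage required) -- hypothetical tuning points
for f, v in [(1.00, 1.000), (1.05, 1.050), (1.10, 1.125)]:
    print(f"+{(f - 1) * 100:2.0f}% clock -> {(rel_power(v, f) - 1) * 100:+3.0f}% power")
# In this toy model a ~5% clock bump costs ~16% more power, and ~10%
# costs ~39% -- the efficiency-point overshoot tipoo is describing.
```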
ID: 1801268
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 1801305 - Posted: 7 Jul 2016, 16:30:10 UTC

While there is precious little detail, initial prices have been announced for the forthcoming GTX 1060 cards. These are said to employ the GP106 chip, a smaller die with much design similarity to the GP104 used by the first two Pascal-family cards, the GTX 1080 and 1070. The suggested base MSRP is $250, with the "Limited Founders Edition" at about $300. Initial shipment release is July 19.
videocardz initial 1060 stuff
gamersnexus 1060 stuff
Happily, the announced memory speed is better than some rumours suggested: 8Gbps GDDR5 on a 192-bit bus, so 0.75 of the 1070 by width with no loss in stated speed (and that speed has a lot of overclocking headroom on my sample of the 1070).

I think this might do pretty well over at Einstein, as BRP6 throughput seems not to scale fully with available core count, so the performance may not go down as much as the reduction from 1920 cores on the GTX 1070 to 1280 on the GTX 1060 suggests. I'm hoping for three-quarters of the performance at two-thirds of the price, as sketched below, and suspect it will more easily stay cool enough in my less capable cases. I don't have much of an idea how it might fare here at SETI.
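
As a quick sanity check on that hope (core counts from the published specs; the price ratio is the "two-thirds" figure above):

```python
# "Three-quarters of the performance for two-thirds of the price":
# naive scaling by CUDA core count (1280 vs 1920) would give only ~0.67x,
# so the hoped-for 0.75x implies better-than-linear per-core efficiency.
cores_1060, cores_1070 = 1280, 1920
naive_perf = cores_1060 / cores_1070     # ~0.667
hoped_perf = 0.75
price_ratio = 2 / 3                      # 1060 price relative to the 1070

print(f"naive perf ratio:       {naive_perf:.2f}")
print(f"perf per dollar, naive: {naive_perf / price_ratio:.2f}x the 1070")
print(f"perf per dollar, hoped: {hoped_perf / price_ratio:.2f}x the 1070")
```

On the naive core-count scaling the two cards would be identical value; the hoped-for 0.75x would make the 1060 about 13% better performance per dollar.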

I've seen a claim that the review embargo lifts at 9 a.m. on July 19. If so, people won't have reviews to go on when making up their minds ahead of the shipping release. Maybe this will take some of the zest out of the first-day frenzy. On the other hand, if they pulled in the originally planned release date, supply may be even more limited.
ID: 1801305
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1801326 - Posted: 7 Jul 2016, 19:06:05 UTC

Interesting news. Now all we need to fill out the lineup, from top end to bottom end, is info on the GTX 1050 cards. My guess is the top-end Big Pascal lineup is going to arrive sometime later this fall, but I hope the 1050 repeats the success of the 750 Ti in terms of performance per watt. I personally haven't heard anything about it yet; I'm also hoping the price eventually settles into the same range (~$120 or so), though that may take a little while.

ID: 1801326
shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1801369 - Posted: 7 Jul 2016, 23:03:30 UTC - in response to Message 1801305.  

Thank you for the heads-up, archae!

Nvidia, meanwhile, has made the announcement official, and the launch date will be July 19th:

http://www.anandtech.com/show/10474/nvidia-announces-geforce-gtx-1060-july-19

http://www.geforce.com/hardware/10series/geforce-gtx-1060
ID: 1801369
shizaru
Volunteer tester
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1801375 - Posted: 7 Jul 2016, 23:05:49 UTC
Last modified: 7 Jul 2016, 23:06:22 UTC

Title change
(apologies for double post)
ID: 1801375