1080 underclocking

Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810766 - Posted: 20 Aug 2016, 16:50:51 UTC - in response to Message 1810756.  


One of those cases I would like to come across in one of the CUDA manuals or some such; though I'm still working through the latest (8.0rc) updates, I haven't spotted a specific mention yet.

Jason, does this new CUDA 8.0rc represent the mainstream development for the CUDA SETI apps? Is it a separate fork from the specialised CUDA 7.5 fork of Petri's app, or does it tie both together?
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810766 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810768 - Posted: 20 Aug 2016, 16:53:53 UTC - in response to Message 1810765.  
Last modified: 20 Aug 2016, 16:54:58 UTC

Hi,

My settings are a workaround to make P2 performance equal P0.
The latest driver does not copy the P0 settings to P2, but the one from around the card's initial release (late May/early June) does. That is why I do not use the latest driver.

A couple of years ago I bought a 780 and it did the same thing, i.e. it dropped to P2 on compute. Later drivers fixed that. I'm waiting for a new driver that will allow P0 for compute workloads.

Not sure how I am able to achieve my P2 settings then as I am using the latest Nvidia Windows 7 driver 372.54.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810768 · Report as offensive
Profile petri33
Volunteer tester

Send message
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1810770 - Posted: 20 Aug 2016, 17:06:07 UTC - in response to Message 1810768.  

Hi,

My settings are a workaround to make P2 performance equal P0.
The latest driver does not copy the P0 settings to P2, but the one from around the card's initial release (late May/early June) does. That is why I do not use the latest driver.

A couple of years ago I bought a 780 and it did the same thing, i.e. it dropped to P2 on compute. Later drivers fixed that. I'm waiting for a new driver that will allow P0 for compute workloads.

Not sure how I am able to achieve my P2 settings then as I am using the latest Nvidia Windows 7 driver 372.54.


To start with, you could experiment with an earlier driver version available from NVIDIA. I'm not sure whether you have nvidia-settings.exe and nvidia-smi.exe in Windows, or whether you can use a more advanced tool to set the NV parameters.

A Windows guru could help here.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1810770 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1810775 - Posted: 20 Aug 2016, 17:38:02 UTC - in response to Message 1810766.  
Last modified: 20 Aug 2016, 17:44:21 UTC


One of those cases I would like to come across in one of the CUDA manuals or some such; though I'm still working through the latest (8.0rc) updates, I haven't spotted a specific mention yet.

Jason, does this new CUDA 8.0rc represent the mainstream development for the CUDA SETI apps? Is it a separate fork from the specialised CUDA 7.5 fork of Petri's app, or does it tie both together?


A pretty simple question that has some fairly long answers involved (sorry, TL;DR version at the bottom).

The X41 series is basically finished/closed, apart from any major urgent bugs or workarounds that would require replacing the existing stock/installer builds. It'll be used as a 'baseline' for the new designs (described later), in terms of both its strengths and a lot of hard lessons learned that will need to be carried forward.

Performance challenges were brought to the fore by the new GBT work, which Xbranch was never designed to handle (a void which, for many, the OpenCL apps are filling for the moment; kudos to Raistmer and the others involved there).

Petri's huge amount of work toward performance, specifically making use of newer/better techniques, hardware and libraries, exposes some serious infrastructure problems inherited from the original codebase (nv's contribution). More precisely, maintaining multiple build systems per platform, each 'kind of wonky' in its own way, is at odds with modernising cross-platform development in a reliable/consistent way. That's partly a complex mix of deprecations on all sides (mostly nvidia's) that differs for every platform in nasty ways.

Petri's still working on his end, which I currently maintain in Berkeley's SVN under an alpha folder. This work frees me up from some of the detail, enough to consider bigger-picture things, such as rectifying the long-standing cross-platform issues (among others).

As this improves, I'll likely make available some 'Advanced User', limited-to-specific-hardware, wider alpha test builds, as a test of a completely new build system (Gradle automation).

The reason for the limited-build -> extended-alpha approach, apart from validation needing work, relates to integration problems I ran into on Windows builds: I require a unified codebase for older and newer hardware, so that the build count can be reduced, or at least kept low (fewer points of failure, simplified user experience, easier deployment, etc. ... it's a long list).

In light of all that, I decided to commence a long-planned 'clean slate' that had been put off while the status quo was fairly stable. X42 infrastructure is under construction on my GitHub. Currently I'm focusing on getting the new build system working with small test pieces, so that the three main platforms can have builds made simultaneously, run prescribed regression tests, and ultimately deploy with minimal intervention.

[TL;DR] The X41 apps outgrew what they were designed to do, and wedging in the new technologies is not going as smoothly as it should, given the codebase's age compared against modern needs/expectations and tools. So new 'special' apps will begin to circulate, for poking out issues, while a complete redesign and re-implementation takes place (X42)... mostly in advance preparation for Volta.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1810775 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1810793 - Posted: 20 Aug 2016, 18:48:04 UTC - in response to Message 1810770.  
Last modified: 20 Aug 2016, 18:48:57 UTC

Hi,

My settings are a workaround to make P2 performance equal P0.
The latest driver does not copy the P0 settings to P2, but the one from around the card's initial release (late May/early June) does. That is why I do not use the latest driver.

A couple of years ago I bought a 780 and it did the same thing, i.e. it dropped to P2 on compute. Later drivers fixed that. I'm waiting for a new driver that will allow P0 for compute workloads.

Not sure how I am able to achieve my P2 settings then as I am using the latest Nvidia Windows 7 driver 372.54.


To start with, you could experiment with an earlier driver version available from NVIDIA. I'm not sure whether you have nvidia-settings.exe and nvidia-smi.exe in Windows, or whether you can use a more advanced tool to set the NV parameters.

A Windows guru could help here.


nvidia-smi.exe is present on my Win7 x64 system [C:\Program Files\NVIDIA Corporation\NVSMI], along with a PDF of man-page-style help and some other binaries I've not examined yet. This is with driver 365.19, which IIRC was installed with a prerelease version of the CUDA 8 toolkit.

It appears to expose the full NVML interface, including at least some settings, though I've not played with it as I've been using nVidia Inspector.

A separate possible development interest: there are supposedly Python bindings to the same NVML interface, which seems to replace the old nvapi C library on Windows, bringing things in line with Linux and possibly Mac (to be confirmed). If that proves to be the case, some crunching-oriented dedicated tools seem more possible/practical, given that nice GUI tools are hard to come by on Mac + Linux at present, while the Windows ones tend to be more gaming/graphics oriented.
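
For the curious, a first poke at those bindings would look something like the sketch below. Treat it as a rough sketch only: I haven't run it on the Win7 box yet, and the module and constant names (the nvidia-ml-py 'pynvml' module) are as I recall them from the docs.

    # Rough sketch: query the compute-relevant readings NVML exposes
    # (P-state, clocks, temperature, power) through the Python bindings.
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(h)
            pstate = pynvml.nvmlDeviceGetPerformanceState(h)  # 0 = P0, 2 = P2, ...
            core = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_GRAPHICS)  # MHz
            mem = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_MEM)        # MHz
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)  # deg C
            watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # NVML reports milliwatts
            print("GPU %d %s: P%d, core %d MHz, mem %d MHz, %d C, %.1f W"
                  % (i, name, pstate, core, mem, temp, watts))
    finally:
        pynvml.nvmlShutdown()

Power comes back in milliwatts and the P-state as a plain integer, so even a loop that simple would be enough to spot a card dropping out of P0 mid-task.
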
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1810793 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810798 - Posted: 20 Aug 2016, 19:45:08 UTC - in response to Message 1810793.  

Thanks for that great explanation, Jason. I didn't know that NVSMI document was installed. It looks like another good way to alter power states from the command line or a batch file for the Kepler+ families. Though Nvidia Inspector is still a simple tool to use for now.
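
For instance, if I'm reading the NVSMI doc right, the "application clocks" route through those Python bindings Jason mentioned would go something like the sketch below. I haven't tried it on my cards; it apparently needs admin rights, and on GeForce boards the set call may simply come back as Not Supported.

    # Sketch: try to pin the clocks used for compute work via NVML application clocks.
    # Needs elevation; many GeForce boards just report Not Supported for the set call.
    import pynvml

    pynvml.nvmlInit()
    h = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed
    try:
        mem = max(pynvml.nvmlDeviceGetSupportedMemoryClocks(h))         # MHz
        gfx = max(pynvml.nvmlDeviceGetSupportedGraphicsClocks(h, mem))  # MHz at that mem clock
        print("Requesting application clocks: mem %d MHz, core %d MHz" % (mem, gfx))
        pynvml.nvmlDeviceSetApplicationsClocks(h, mem, gfx)  # memory clock first, then graphics
    except pynvml.NVMLError as err:
        print("Application clocks not available on this card/driver: %s" % err)
    pynvml.nvmlShutdown()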

I sense a foreboding of much greater challenges, which I interpret as a sea-change that the Volta generation is going to cause. Am I correct in that assumption?
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810798 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1810802 - Posted: 20 Aug 2016, 20:06:54 UTC - in response to Message 1810798.  

Thanks for that great explanation, Jason. I didn't know that NVSMI document was installed. It looks like another good way to alter power states from the command line or a batch file for the Kepler+ families. Though Nvidia Inspector is still a simple tool to use for now.

I sense a foreboding of much greater challenges, which I interpret as a sea-change that the Volta generation is going to cause. Am I correct in that assumption?


I think so. Things were pretty great (i.e. simple) up to the small Kepler (~GTX 680) generation, but Big K (780 onwards) basically changes everything. Petri certainly seems to be coming to grips with the architecture shifts, and extracting throughput from them. That's a burden I'm happy to have lifted from myself, so that I can prepare the increasingly important 'other bits'.

The appearance of this NVML interface on Windows is also currently fascinating me, in particular the Python bindings mentioned. I will likely be able to make heavy use of that within the automated testing, and maybe even offer some compute-centric GUI tools down the road. It kind of suggests we might be able to 'de-nerdify' some of the complex benching/regression/monitoring user interfaces in ways that could end up less clunky and more useful. I'm still analysing what might be feasible, but components like that might bring 'proper' heterogeneous apps a little closer.
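
To give a feel for the sort of thing I mean, the monitoring half of a bench/regression run needn't start out as anything more than the sketch below (same pynvml assumptions as the earlier snippet; the file name and sample interval are just placeholders).

    # Sketch: poll one GPU during a bench run and append readings to a CSV,
    # so clock or P-state drops can be lined up against task start/finish times.
    import csv
    import time
    import pynvml

    pynvml.nvmlInit()
    h = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed
    with open("gpu_log.csv", "w") as f:       # placeholder file name
        log = csv.writer(f)
        log.writerow(["time", "pstate", "core_mhz", "mem_mhz", "temp_c", "power_w"])
        try:
            while True:
                log.writerow([int(time.time()),
                              pynvml.nvmlDeviceGetPerformanceState(h),
                              pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_GRAPHICS),
                              pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_MEM),
                              pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU),
                              pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0])
                f.flush()
                time.sleep(5)                 # 5 second sample interval, arbitrary
        except KeyboardInterrupt:
            pass                              # stop logging with Ctrl-C
    pynvml.nvmlShutdown()
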
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1810802 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1810832 - Posted: 20 Aug 2016, 21:43:18 UTC - in response to Message 1810693.  

If the card is well under its maximum possible thermal load, and well under its maximum possible power load, then there is no reason for it to drop its clock speed down.


But you actually don't know that.

As I indicated in my msg 1807934, the GPU might have reached its limit on one or more power input pins, while others, such as the video output (which we don't use in our BOINC/SETI crunching), are drawing the minimum power necessary.

And that would indicate a design flaw, and under Australian Consumer legislation people would (after a lot of time & effort, naturally) be able to get a refund on their purchase as it doesn't meet the claims of the sales literature.
If the reported power & temperature readings are well below the rated limits for the card, there is no good reason for it not to sustain its rated Boost speed.

I noticed in the opening post that Zalster has modified the cooling on the card. Are the onboard regulators receiving enough cooling as a result? That's one thought that comes to mind.


Also, on temps: there is probably more than one sensor, probably all connected to the same circuit that interrupts overclocking, and these sensors act immediately, much faster than the one that measures and reports the GPU temperature. So again, you don't know that one small part of the GPU has reached its temperature limit, except for the fact that the GPU has reduced its clock speed.

Possible, but unlikely IMHO.
In the case of the Intel CPUs you mentioned, as well as the case temperature there are also sensors for each core, which are readable by most hardware monitoring software.
If there were multiple GPU temperature sensors, I would expect them to be displayed by such software.
Grant
Darwin NT
ID: 1810832 · Report as offensive
Profile petri33
Volunteer tester

Send message
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1810835 - Posted: 20 Aug 2016, 21:53:17 UTC - in response to Message 1810832.  

If the card is well under its maximum possible thermal load, and well under its maximum possible power load, then there is no reason for it to drop its clock speed down.


But you actually don't know that.

As I indicated in my msg 1807934, the GPU might have reached its limit on one or more power input pins, while others, such as the video output (which we don't use in our BOINC/SETI crunching), are drawing the minimum power necessary.

And that would indicate a design flaw, and under Australian Consumer legislation people would (after a lot of time & effort, naturally) be able to get a refund on their purchase as it doesn't meet the claims of the sales literature.
If the reported power & temperature readings are well below the rated limits for the card, there is no good reason for it not to sustain its rated Boost speed.

I noticed in the opening post that Zalster has modified the cooling on the card. Are the onboard regulators receiving enough cooling as a result? That's one thought that comes to mind.


Also, on temps: there is probably more than one sensor, probably all connected to the same circuit that interrupts overclocking, and these sensors act immediately, much faster than the one that measures and reports the GPU temperature. So again, you don't know that one small part of the GPU has reached its temperature limit, except for the fact that the GPU has reduced its clock speed.

Possible, but unlikely IMHO.
In the case of the Intel CPUs you mentioned, as well as the case temperature there are also sensors for each core, which are readable by most hardware monitoring software.
If there were multiple GPU temperature sensors, I would expect them to be displayed by such software.


Hi,

The GTX 1080 comes with a handy installation manual. There is a statement that says you can, and should, use a screwdriver to remove part of the backplate of the adjacent GTX 1080 to enable better cooling.

So, heat may be a problem. The high-end Pascal cards (the Titan and the workstation cards) run at much lower clock speeds.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1810835 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1810836 - Posted: 20 Aug 2016, 21:56:02 UTC - in response to Message 1810737.  

Coming late to this discussion because I just picked up two 1070s and needed to see the current ideas on how to get the cards to run closer to stock P0 settings for distributed computing. Looks like Nvidia has hamstrung their GPGPU performance yet again, like with Maxwell. So I attacked the problem as I did with my 970, by using Nvidia Inspector. Added a mild +50 MHz to the core speed and +400 MHz to the memory speed. GPU-Z has the cards running at 1923 and 1911 MHz on the core clock and an effective memory clock speed of 8400 MHz. I am happy that I could still use the existing tools to get the cards running optimally for GPGPU computing for the SETI project.


My GTX 1070 has a Gaming mode (1594 MHz base, 1784 MHz boost) and an Overclocking mode (1620 MHz base, 1822 MHz boost).
It's been running at 1923.5 MHz since I installed it.
However, the memory clock is reported as 1,901.2 MHz, which is effectively 7,605 MHz; well down from the claimed 8,008 MHz.
Given the high GPU clock speed & everything being stable, I haven't been too fussed about the memory speed.
Grant
Darwin NT
ID: 1810836 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1810841 - Posted: 20 Aug 2016, 22:04:00 UTC - in response to Message 1810835.  

So, heat may be a problem. The high-end Pascal cards (the Titan and the workstation cards) run at much lower clock speeds.

Possibly, as those cards do have a lot more active silicon than the lesser models. However, I'd still expect that to be reflected in the reported temperatures.

It will be interesting to see how my card goes once the temperatures warm up here again. Lately it's only been high 20s to 30°C. When it gets into the low-to-mid 30s°C, we'll see just how well the card copes. With the huge heatsink & triple fans it should be OK.
Grant
Darwin NT
ID: 1810841 · Report as offensive
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1810849 - Posted: 20 Aug 2016, 22:34:35 UTC - in response to Message 1810832.  

I noticed in the opening post that Zalster has modified the cooling on the card. Are the onboard regulators receiving enough cooling as a result? That's one thought that comes to mind.


I totally admit to modifying that 1080. I left it unmodified for a while, but I didn't like the constant 80°C temperature.

So I removed 2/3 of the shroud and installed a hybrid cooler on that card. That brought the temperature down to 43°C, and it's stayed there. The original fan is still in place and functioning, and the hybrid sits over the chip with a fan and radiator.

Still don't know why it is downclocking, as the temps are way below what they consider the upper limit. I've got it overclocked with Precision X, and that keeps it at the boost speed of 1.96 GHz.
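
If those NVML Python bindings Jason mentioned work on this card, something along the lines of the sketch below might at least say which limiter is tripping. Just a sketch, not something I've run, and the constant names are taken from the NVML docs as best I can tell.

    # Sketch: ask NVML which throttle reasons are currently active (power cap,
    # hardware slowdown, idle, application clocks) instead of guessing from temps.
    import pynvml

    pynvml.nvmlInit()
    h = pynvml.nvmlDeviceGetHandleByIndex(0)  # the 1080; adjust index as needed
    reasons = {
        pynvml.nvmlClocksThrottleReasonGpuIdle: "GPU idle",
        pynvml.nvmlClocksThrottleReasonApplicationsClocksSetting: "application clocks setting",
        pynvml.nvmlClocksThrottleReasonSwPowerCap: "software power cap",
        pynvml.nvmlClocksThrottleReasonHwSlowdown: "hardware slowdown (thermal or power brake)",
        pynvml.nvmlClocksThrottleReasonUnknown: "unknown",
    }
    mask = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(h)  # bitmask of active reasons
    active = [label for bit, label in reasons.items() if mask & bit]
    print("Active throttle reasons: %s" % (", ".join(active) if active else "none"))
    pynvml.nvmlShutdown()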

Zalster
ID: 1810849 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1810852 - Posted: 20 Aug 2016, 22:41:41 UTC - in response to Message 1810849.  

The original Fan is still in place and functioning

So the onboard regulators are still getting plenty of cooling.
Generally, if they were to start to overheat, I'd expect lockups, errors or other weird behaviour due to poor regulation; voltages would most likely go higher or become noisy. Not necessarily downclocking.
But weirder things have happened.
Grant
Darwin NT
ID: 1810852 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810856 - Posted: 20 Aug 2016, 22:59:15 UTC - in response to Message 1810835.  


The GTX 1080 comes with a handy installation manual. There is a statement that says you can, and should, use a screwdriver to remove part of the backplate of the adjacent GTX 1080 to enable better cooling.

So, heat may be a problem. The high-end Pascal cards (the Titan and the workstation cards) run at much lower clock speeds.

I wonder whether I would get any benefit from removing that part of the backplate on my 1070s. I have one slot of spacing between them, so they both supposedly get plenty of air input. I also wonder whether there would be much of a difference in cooling, considering the 1070s are only 150 W TDP while the 1080s are 180 W.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810856 · Report as offensive
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1810864 - Posted: 20 Aug 2016, 23:27:55 UTC - in response to Message 1810856.  

@Petri

I'm looking at the manual and I see where you talk about removing part of the backplate to allow for better airflow. Unfortunately that's not going to benefit the original GPU, as it only allows airflow to an adjacent GPU.

Now, if you could remove the last 1/3 of the shroud, the part farthest from the video outputs but closest to the fan (which you can, but it requires taking the WHOLE GPU apart), then that would allow better flow from the front end. What's there now are fake fins that don't do anything.

Evidently EVGA became aware that the backplate might (and does) hit the fan of the GPU above it in SLI configuration.

Anyway, just an FYI.
ID: 1810864 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810887 - Posted: 21 Aug 2016, 2:07:26 UTC - in response to Message 1810864.  

I was kinda hoping it meant there was no PCB in the area opposite the blower, so that if you removed the backplate section you would draw air in from both the front and back of the card. That would allow maximum induction into the heatsink fins. I think I've seen that kind of design on AMD cards, if I'm not mistaken.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810887 · Report as offensive
Profile Zalster Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1810889 - Posted: 21 Aug 2016, 2:10:23 UTC - in response to Message 1810887.  

If they are reference cards, they have the fake fins. If it's an ACX design, then it will vent out the front and back.

Since we are talking about the reference design, short of completely disassembling the card to remove the fake fins, there's not much benefit to removing the backplate.
ID: 1810889 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1810894 - Posted: 21 Aug 2016, 2:23:18 UTC - in response to Message 1810889.  

Yes, they are the Nvidia reference design, Founder's Edition. I swore I wasn't going to pay a premium for the new Pascal cards and would wait until they started selling near their MSRP. Two months later and there's been absolutely no movement in prices, either OEM or AIB. I wanted the blower-style cards, since I've had bad experiences with AIB twin-fan cards when two cards sit next to each other, so I only go with the blower style now. My preferred vendor, EVGA, still does not have stock of either their reference version or their optimized blower style, so for S&G I checked whether the local Best Buy had any 1070s. They did: Nvidia FEs, and for list MSRP, the same price as EVGA's offerings... if EVGA had any in stock. So, since I was driving into town for shopping anyway, I picked them up.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1810894 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19012
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1810906 - Posted: 21 Aug 2016, 3:21:53 UTC - in response to Message 1810832.  

And that would indicate a design flaw, and under Australian Consumer legislation people would (after a lot of time & effort, naturally) be able to get a refund on their purchase as it doesn't meet the claims of the sales literature.


I doubt it. Nvidia GPUs are advertised heavily as devices to improve your gaming experience by speeding things up and increasing the resolution available. You have to look fairly hard to find the information that they can also be used as parallel maths processors.

And if you look for Nvidia maths processors, it will lead you to their other products, such as Quadro, Tesla, etc.

Which means that those who build and use computers with graphics cards designed mainly for the gaming industry as dedicated maths processors are using them outside their design specs.
_________________

It would also not be possible to connect the power inputs from the voltage regulators together if they were at different voltages. Ignoring the fact that it would be bad design, there is no point building circuits with current-limit capabilities to save your expensive GPU and then bypassing them by allowing more than one voltage regulator to feed the same input.

How many different voltages are required from the motherboard regulators in today's computers?
When I started moving into digital development, in the 6502 and Z80 era, we grabbed a 5V power supply and, as an afterthought, added -5V and ±12V as required for external interfaces. You won't find any reference to 3.3V, or to an adjustable CPU core voltage of a nominal 1.35V, on early PCs.
ID: 1810906 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1810916 - Posted: 21 Aug 2016, 4:48:12 UTC - in response to Message 1810906.  
Last modified: 21 Aug 2016, 5:21:00 UTC

And that would indicate a design flaw, and under Australian Consumer legislation people would (after a lot of time & effort, naturally) be able to get a refund on their purchase as it doesn't meet the claims of the sales literature.


I doubt it. Nvidia GPUs are advertised heavily as devices to improve your gaming experience by speeding things up and increasing the resolution available. You have to look fairly hard to find the information that they can also be used as parallel maths processors.

And support for CUDA, OpenCL & Vulkan, plus being designed for overclocking, are the claims made for my card. Not being able to run at its designed Boost speeds would contradict those claims.
The sort of thing that makes lawyers rich.


And if you look for Nvidia maths processors, it will lead you to their other products, such as Quadro, Tesla, etc.

Which means that those who build and use computers with graphics cards designed mainly for the gaming industry as dedicated maths processors are using them outside their design specs.

But not if the cards in question say they support the applications being run: CUDA, OpenCL, Vulkan.
Even when running a business using gaming cards, they still need to meet the claims made in the advertising literature.
Lawyers, rich.


It would also not be possible to connect the power inputs from the voltage regulators together if they were at different voltages. Ignoring the fact that it would be bad design, there is no point building circuits with current-limit capabilities to save your expensive GPU and then bypassing them by allowing more than one voltage regulator to feed the same input.

Nope.
But I suspect what they call "6+2 phases" means they've got 6 output transistors for their switch-mode outputs, with another 2 to help share the load of the 6 used in the reference design.
Or it could mean there are 6 individual rails, plus another couple to help out.
I have to admit that using "phases" in this context makes no sense to me. Phases are an AC thing.

EDIT: just found something useful amongst the noise, from someone who knows what they're talking about.
Low output voltage switchmode supply
Grant
Darwin NT
ID: 1810916 · Report as offensive