|Jason 38, from Adelaide, Australia|
Computer Science circa 1995
8 years in Electronics industry
Electronic Engineering (Currently Studying)
Hobbies: my dog, hydroponic tomato growing. Big fan of any ageless music, and anything electronic too.
|Thoughts about SETI and SETI@home|
|Kepler GPUs for Dedicated Crunching and OC|
Some first-time Kepler GPU owners are hitting brick walls when it comes to overclocking them. The new GPU Boost technology included in this architecture works very well when you work 'with' it instead of trying to fight it! Here's a rough guide that *might* help you tame the beast. (Warning: OC and tweak at your own risk!)
The boost frequency is controlled by a complex algorithm implemented in hardware, involving roughly 10-15 parameters, some not directly controllable. For 'our' purposes we want to:
1) keep the GPU below 70 degrees C, where that parameter enters the equation, by upping the fan to, say, a fixed 80%, then
2) set the power target to 132% which lifts that parameter out of the equation,
that leaves the current (not directly under our control), plus the voltage and GPU usage parameters (under our control), so
3) ensure full usage with 2 or 3 tasks per GPU,
4) if temps are still well below 70 C, up the voltage to the max 1.175 V (which is still 'lean' for GK104), and
5) use a priority-adjusting tool such as Fred's efMer Priority v1.2 or Process Lasso, or use x41x's (yet to be released) mbcuda.cfg, to jack the process priority to 'above normal'.
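The steps above can be sketched as a toy model of the boost gating. To be clear, this is *not* NVIDIA's actual algorithm (that lives in hardware and is largely undocumented); the function, its thresholds, and the 95% usage cutoff are all my own illustrative assumptions about which knobs each step neutralizes:

```python
# Toy model of the GPU Boost headroom check (illustration only; the real
# algorithm is in hardware and undocumented). All thresholds are assumed.

def boost_headroom(temp_c, power_pct, power_target_pct, gpu_usage_pct):
    """Return True if this toy boost model would keep clocks high."""
    if temp_c >= 70:                   # step 1: stay below 70 C via fixed 80% fan
        return False
    if power_pct >= power_target_pct:  # step 2: 132% target lifts this limit out
        return False
    if gpu_usage_pct < 95:             # step 3: keep the GPU saturated (2-3 tasks)
        return False
    return True

# With the tweaks applied, all gates pass:
print(boost_headroom(temp_c=63, power_pct=110, power_target_pct=132,
                     gpu_usage_pct=99))   # True
```

The point of the sketch is that each tweak removes one gate from the equation, leaving only the clock offsets to tune.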
Now you have the GPU clock and memory clock offsets (which should still be at +0 for now). Trimming these will be slightly different for every GPU.
Usually at this point you'd use some sort of artefact-scanning program to gradually up those clock offsets until artefacts occur within an hour, then back off 2 'notches'. However, I found it easier to use Lunatics' KnaBench program to run the 'PG set' with a CPU app as reference. When stable, all Q's should be 'Strongly Similar ~99%' to 6.03 or AK, with the V6 PG set. Used that way, the bench amounts to a 'purpose-built' artefact scanner in some ways, so I'm not ignoring the possibility of building something easier to use on that approach, while still being dedicated for SETI work, in the future.
Ease up the GPU clock offset (not too much at a time) and repeat the short bench until the Q's drop, then back off; then follow the same process with the memory clock, keeping the core clock backed off, and you should have your maximum stable offsets. With an unmodified reference card/cooler, they should be somewhere around +100 core clock and +400 memory clock, which will lead to a GPU Boost clock OC of around 1220-1240 MHz.
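That tuning loop can be sketched in code. Here `is_stable` is a stand-in for a KnaBench 'PG set' run plus a Q comparison against the CPU reference app (there's no real API for that); the 13 MHz notch size and the fake stability ceiling are assumptions purely for illustration:

```python
# Sketch of the offset-tuning procedure. is_stable() stands in for running
# the KnaBench 'PG set' and checking all Q's are 'Strongly Similar ~99%'
# against the CPU reference; here it is faked for illustration.

STEP = 13  # MHz per 'notch' (assumed; depends on the tool's granularity)

def find_max_offset(is_stable, start=0, step=STEP, limit=300):
    """Raise the offset one notch at a time until the bench fails, then back off."""
    offset = start
    while offset + step <= limit and is_stable(offset + step):
        offset += step
    return offset

# Fake bench: pretend this card holds good Q's up to a +104 MHz core offset.
fake_ceiling = 104
core = find_max_offset(lambda off: off <= fake_ceiling)
print(core)  # last notch that still passed the fake bench
```

You'd run the same loop a second time for the memory clock offset, with the core offset held at its backed-off value.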
An important note: factory reference cards are deliberately a bit voltage starved, even maxed at 1175 mV, to stay within the reference thermal/power/acoustic spec envelope, so pushing beyond these figures can require hardware volt modding and current-sensor bypass. Though the silicon is quite capable, the cooling, power and current limits aren't really pushed on my card with reference everything, so I see some hardware modding in my card's future.
You do get used to working 'with' the GPU Boost mechanism in this way, instead of 'against' it. Underneath, it really isn't any different from a max-stable-OC point of view, but the dynamic clocks do take some getting used to.
Here's where I'm at on the GTX 680:
- power target 132% (lifted out of influence)
- Fan fixed 80%
- Voltage 1175mV
- GPU clock offset +100MHz
- Mem clock Offset +440MHz
- x41x, 3 tasks, abovenormal process priority
- ~62-63 degrees Celsius.
That yields a conservative 1215 MHz GPU Boost clock while running, which is about 20% above the stock unboosted ~1006 MHz. I can watch video and play older games in 3D while crunching, without snoozing BOINC.
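As a quick sanity check on that ~20% figure, the overclock percentage is just the boost clock relative to the stock unboosted clock:

```python
# Arithmetic check: 1215 MHz boost vs ~1006 MHz stock unboosted.
stock_mhz = 1006
boost_mhz = 1215
oc_pct = (boost_mhz - stock_mhz) / stock_mhz * 100
print(f"{oc_pct:.1f}% over stock")  # 20.8% over stock
```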