Crunching advice

Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1707283 - Posted: 1 Aug 2015, 10:12:50 UTC
Last modified: 1 Aug 2015, 10:17:13 UTC

Hey all, I don't post very often and I recently got back from a pretty long break (sorry!) due to internet issues, but it's now winter here and I'm loving my rig keeping my room toasty :) With that in mind, a few questions on improving output..

Firstly, I'm doing GPU-only crunching at the moment - partly to measure its output, partly because of power issues (the circuit breaker keeps popping when others are home), and partly because I didn't know if it was better to keep a thread/core free to feed the GPU or not.. Thoughts? Better to just go full 8/8 threads on CPU plus 2 GPU tasks (which keeps it at 98-99%)?

Secondly, upgrade advice.. Assuming I can get my power issues sorted, I'd be looking at getting a new GPU or two. I currently have a GTX 660 Ti @ 1202/1555, which looks to give me ~12k RAC at this stage.. It's a bit tricky finding 660 Tis compared to newer models, but I'm not much of a gamer anymore, so I don't mind mixing things up and forgoing SLI (I'm running a GA-EX58-UD4P, so 3 x16 slots) - or is it better just to scrap the 660 Ti and just get new gear? If so, I'm seeing the 750 Ti being the current price/performance champ?

Thirdly, based on whether or not we decide to go the multi-GPU route, what then regarding CPU crunching? Dedicate cores to the GPUs?

Lastly, I can't seem to find a working version of ReScheduler around.. Given my ongoing internet issues and only having ~70% uptime, and that there's a limit of 100 in-progress tasks per device, I was wondering if it's still possible to reschedule WUs so the faster devices can run them when things dry up?

EDIT: That reminds me! Is there any way to prioritise shorties to get them done sooner? Previously I would have rescheduled them, or it would have happened automatically if EDF was triggered.. but I'm not sure how things work now with the changes to the WU cache..

Sorry for the long-winded post! It's been a long day and I've had a few...

Cheers! :)
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1707288 - Posted: 1 Aug 2015, 10:38:06 UTC - in response to Message 1707283.  

and partly because I didn't know if it was better to keep a thread/core free to feed the GPU or not.. Thoughts?

If crunching AstroPulse, yes; otherwise it's not necessary.

or is it better just to scrap the 660 Ti and just get new gear?

IMHO scrap it & use several GTX 750Tis.
A 750Ti will give slightly lower credit, but you can run 3 of them and use less power than 1 GTX 660Ti (at stock speeds - overclocking significantly increases power consumption for a much smaller increase in performance).

If so, I'm seeing the 750 Ti being the current price/performance champ?

For the price & power consumption it can't be beaten.
For performance the GTX 960/970s lead. The 980 Tis are even better, but cost considerably more and use more power than the lesser models.

Thirdly, based on whether or not we decide to go the multi-GPU route, what then regarding CPU crunching? Dedicate cores to the GPUs?

See the very first Q&A.

Lastly, I can't seem to find a working version of ReScheduler around.. Given my ongoing internet issues and only having ~70% uptime, and that there's a limit of 100 in-progress tasks per device, I was wondering if it's still possible to reschedule WUs so the faster devices can run them when things dry up?

With CreditNew, rescheduling causes significant issues with credit & work allocation. Even prior to that, I just couldn't see why people bothered.

EDIT: That reminds me! Is there any way to prioritise shorties to get them done sooner? Previously I would have rescheduled them,

Why???
All that matters is that they are done before the deadline, and the manager will move them up the queue if necessary. If it's not necessary, then why do it???
Grant
Darwin NT
Profile TimeLord04
Volunteer tester
Joined: 9 Mar 06
Posts: 21140
Credit: 33,933,039
RAC: 23
United States
Message 1707289 - Posted: 1 Aug 2015, 10:39:19 UTC

If you look at my two computers, the one I call Prometheus is using a GTX 750 Ti SC, and the other, which I call Exeter, is using a GTX 760. Both systems are only dual core; so, I choose NOT to crunch on CPU, but ONLY GPU.

I'm running the Lunatics 0.43b optimized apps on both systems, but I just found out that Prometheus (due to other hardware limitations on the motherboard) CANNOT go above NVIDIA driver 337.88. Exeter is now on driver 353.30. The newer drivers (above 347.88) use OpenCL 1.2 for AP tasks. If you use Lunatics and want to use the latest drivers, you MUST use Lunatics 0.43b.

As for keeping cores free to feed the GPUs; YES, you MUST keep 1 core free per GPU in the system(s). Also, with the GTX 750s and above, if you make use of an "app_config.xml" file, you can make it so that your GPUs crunch 2 or 3 WUs at a time, thus maximizing the use of each GPU. I use an "app_config.xml" file on each system; but because my systems are limited to one GPU per system (limited by the motherboards), my "app_config.xml" will NOT help you. Someone else more familiar with multiple GPUs per system will be able to help you create a proper "app_config.xml" file for your system(s).
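
Just to show the general shape of the file, a bare-bones example for running 2 MB WUs at a time per GPU might look something like this (illustrative only - the app name, plan class and values need to match whatever your own installation actually runs):

<app_config>
   <app_version>
      <app_name>setiathome_v7</app_name>
      <plan_class>cuda50</plan_class>
      <avg_ncpus>0.35</avg_ncpus>
      <!-- 0.5 of a GPU per task = 2 tasks running on each GPU -->
      <ngpus>0.5</ngpus>
   </app_version>
</app_config>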


TL
TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join Calm Chaos
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1707290 - Posted: 1 Aug 2015, 10:52:55 UTC - in response to Message 1707289.  

As for keeping cores free to feed the GPUs; YES, you MUST keep 1 core free per GPU in the system(s).

Only if doing AP.
I do MB only, and have tried leaving a core free - it made no difference at all to GPU crunching times, but I lost the benefit of that core's crunching. Hence no free core.
Grant
Darwin NT
Profile TimeLord04
Volunteer tester
Joined: 9 Mar 06
Posts: 21140
Credit: 33,933,039
RAC: 23
United States
Message 1707296 - Posted: 1 Aug 2015, 11:10:41 UTC - in response to Message 1707290.  

As for keeping cores free to feed the GPUs; YES, you MUST keep 1 core free per GPU in the system(s).

Only if doing AP.
I do MB only, and have tried leaving a core free - it made no difference at all to GPU crunching times, but I lost the benefit of that core's crunching. Hence no free core.

Yes, I should have clarified that it is for AP... I crunch both AP and MB. :-)


TL
TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join Calm Chaos
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1707297 - Posted: 1 Aug 2015, 11:21:28 UTC

Thanks heaps for the replies :) I see your point about not needing the rescheduler. It was just a backup for when things get dry due to my rubbish internet connectivity. From what you're saying, I might be looking at either a 960 or 970, as it's been a while since I've put a bit of love into my rig. I'm liking the extra cores of the 970, but the increase in price.. hmm.

You bring up a good point with AP crunching. I hear people speaking quite highly of AP WUs, so I was wondering - I currently only have MB set up. I just reinstalled the Lunatics apps to include CPU crunching, and now for some reason my GPU is only crunching 1 WU (I changed the CUDA count to 0.5 in app_info.xml too).. Guess I'll figure that out soon :) It's 11:20pm here, getting a bit tired haha.
Profile TimeLord04
Volunteer tester
Joined: 9 Mar 06
Posts: 21140
Credit: 33,933,039
RAC: 23
United States
Message 1707307 - Posted: 1 Aug 2015, 12:18:57 UTC - in response to Message 1707297.  

Thanks heaps for the replies :) I see your point about not needing the rescheduler. It was just a backup for when things get dry due to my rubbish internet connectivity. From what you're saying, I might be looking at either a 960 or 970, as it's been a while since I've put a bit of love into my rig. I'm liking the extra cores of the 970, but the increase in price.. hmm.

You bring up a good point with AP crunching. I hear people speaking quite highly of AP WUs, so I was wondering - I currently only have MB set up. I just reinstalled the Lunatics apps to include CPU crunching, and now for some reason my GPU is only crunching 1 WU (I changed the CUDA count to 0.5 in app_info.xml too).. Guess I'll figure that out soon :) It's 11:20pm here, getting a bit tired haha.

Using an "app_config.xml" makes it easier to modify how many WUs per card you want to crunch. Plus, when installing the next new Lunatics version, you won't have to go back and re-edit the "app_info.xml" file... Set your "app_config.xml" once, and let it be.

Again, however, my "app_config.xml" file is set to work only on one GPU per system, because of the motherboard limitations on each system. Someone else more familiar with multi-GPU systems will be able to help you create an "app_config.xml" file for your system(s).

Perhaps Zalster, Joe, or anyone but me... ;-)


TL
TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join Calm Chaos
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1707332 - Posted: 1 Aug 2015, 13:53:26 UTC - in response to Message 1707307.  
Last modified: 1 Aug 2015, 14:00:58 UTC

Good Morning,

Sorry for the delay. Busy evening. Hmmm...

OK, my 2 cents: if you are going to look at new GPUs, then you should plan for the future.

The 750s, 960s and 970s all have limited RAM. For now, that's not a problem. But if you scan the threads and listen to Jason and some of the others, eventually, when the applications are updated, having more RAM on the GPU will benefit you when trying to process more work units per card.

So with that in mind

Titan X > GTX 980 Ti > GTX 980s > GTX 970s etc.....

Why is this? Jason can give you the technobabble, but basically the newer applications will (hopefully) use more of the RAM and be faster than the current ones, while only allowing maybe 2-3 work units per card for maximum efficiency.


The 970s suffer from having only 3.5 GB of full-speed RAM, with another 0.5 GB they can "access" (more slowly) when needed. This bottleneck limits the number of work units they can run at any one time.

So the question becomes: do you want fast and efficient for now? In that case, go with the lower-end GPUs. Or do you want to plan for the future? In that case I would suggest 980s or above. Of course $$$ plays a major role here as well. This endeavor can get pretty expensive quickly.

I tell people go with what you can comfortably afford. You can always upgrade the GPUs later.


Now, as far as the app_config.xml goes:

There have been some changes lately as to how it looks. Someone else gave me this, but it should probably be what you use, keeping in mind that future apps will be coming out.


<app_config>
   <app_version>
      <app_name>astropulse_v7</app_name>
      <plan_class>opencl_nvidia_100</plan_class>
      <avg_ncpus>0.35</avg_ncpus>
      <ngpus>0.5</ngpus>
   </app_version>
   <app_version>
      <app_name>setiathome_v7</app_name>
      <plan_class>cuda50</plan_class>
      <avg_ncpus>0.35</avg_ncpus>
      <ngpus>0.5</ngpus>
   </app_version>
</app_config>

This is roughly how mine looks - not exactly the same, as my "values" differ per machine.
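
(If you haven't used one before: app_config.xml lives in the SETI@home project folder under your BOINC data directory, and BOINC picks up changes when you use "Read config files" in the Manager or restart the client - the exact menu location varies a bit between Manager versions.)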

The decision on how many work units to run per GPU should be based on testing to find where the "sweet spot" is. Just because a card can run 3 or even 4 work units doesn't mean you should. You need to look at how long it takes to run those work units and divide by the number of "instances" of work.

Example (these numbers are made up, i.e. not real, but useful for explanation):

Running 3 work units might take 30 minutes: 30/3 = 10 minutes per work unit
Running 2 work units might take 18 minutes: 18/2 = 9 minutes per work unit

So you can see that while you can run 3 units at a time, each one actually takes longer. Why does that matter, you ask - it's just 1 minute?

Because it adds up. In an hour, 3-at-a-time finishes 6 work units, while 2-at-a-time finishes about 6.7. Over 24 hours that's roughly 160 work units versus 144, so the 2-per-card setup ends up noticeably more productive than 3-per-card.

So how do you figure out how many per card? Lots of testing. You can test it yourself or, if you want to avoid the hassle, just ask.

So if you want 2 work units per card you change it to

<ngpus>0.5</ngpus>

if you want 3 per card

<ngpus>0.33</ngpus>

Can you go too high? Yes, either the system will crash, lock up or... wait for it... Blue screen with memory dump.....

Now, how do we figure out how much CPU to give to each work unit? Here you will hear all sorts of numbers thrown at you, and reasons why. What I see is the following: APs, when used with a good command line, can get CPU usage down to 12-15% of a CPU core. If you don't use a command line on the APs, then CPU usage will be a lot higher and you will need to adjust for that. But for now, let's say you do use one, so I'd factor in at least 20% of a core. I tend to be more generous and allow 0.35 of a CPU core. Why? Because I'm lazy.

When you look at MultiBeam work units, I find that they generally tend to use 0.34 of a CPU core - hence 0.35. So I just leave the value at that, as it's easy for me to remember for both APs and MBs. You make those changes here:

<avg_ncpus>0.35</avg_ncpus>

in the above app_config.xml

What you really should be doing is looking at how many CPU cores you have.

In a 4-core system, you want to leave 1 core for the OS. If you are running more than 1 GPU, then you need another core to help support and feed the GPUs. Therefore you need 2 free cores: 4 - 2 = 2.

So that leaves 2 cores for the GPU work units. Let's say you have 3 GPUs and want to run 3 per card, i.e. 9 work units at once. 2 cores / 9 work units = 0.22 of a core for each work unit.

As you can see, this is less than the 0.35 I mentioned. Will they still run? Yes, because in reality you have 4 cores (4/9 = 0.44), but you might get some lagging or unresponsiveness. Remember we talked about leaving 2 cores free? They're reserved, not actually idle - your computer won't know the difference and will try to use those "reserved" cores too.
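
If you wanted to write that hypothetical 4-core / 3-GPU / 3-per-card budget into the file, the MB section would come out something like this (purely illustrative - use whatever numbers your own testing settles on):

<app_config>
   <app_version>
      <app_name>setiathome_v7</app_name>
      <plan_class>cuda50</plan_class>
      <!-- 2 spare cores / 9 concurrent tasks = ~0.22 of a core per task -->
      <avg_ncpus>0.22</avg_ncpus>
      <!-- 1/3 of a GPU per task = 3 tasks per GPU -->
      <ngpus>0.33</ngpus>
   </app_version>
</app_config>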

Just about now someone will jump in and say you can't calculate system usage this way. Probably true, I know, but trying to explain floating usage and shared FP tends to give people headaches. It's like trying to describe organic chemistry models: if you can't do it in 3 dimensions in your head, it's hard to visualize them...

Where was I, oh yeah, talking too much...

Figure out a value and stick with it, then use that as a guide to calculate how many resources you have to play with. Sorry for the lecture.


Zalster
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1707515 - Posted: 2 Aug 2015, 1:31:46 UTC
Last modified: 2 Aug 2015, 2:06:38 UTC

Thanks all for the help!

Thanks TL, I didn't know about the app_config file (only app_info), but that makes things much easier! Previously, with each update I was just replacing <count>1</count> with <count>0.5</count> in Notepad lol. So app_config somewhat overrides app_info? Good to know :) Thanks!

Don't apologise for the lecture, Zalster :) that's actually exactly what I was after. I slept on it, and I'm now pretty sure I'd rather look to the future.. I got my current 660 Ti when it was only a few months old, so it's time for an upgrade :) I've been looking at the 970s, but I was a bit concerned about the 3.5GB+512MB 'issue'.. If I had ~$1100 (NZD) to drop on a 980 Ti, I would, but after looking at TDP, # of CUDA cores, heat output etc, I'm still looking at the 970s (unless the 980s come down in price soon..) so I could simply add a few more when they get cheaper :)

For Grant: I tested your idea of not leaving CPU cores free to feed the GPU, and my tests showed that if I run 8/8 CPU threads and 2/2 GPU tasks, both GPU-Z and MSI Afterburner show my GPU usage spiking from 50-97%, averaging ~84% overnight.. I disabled a thread to dedicate to the GPU, and while this helped, it still spiked between 70-97%.. I disabled 2 threads (sadly not on the same core, but I can fix this) and GPU load went back up to a constant 98-99%. Whether or not the extra ~10% GPU crunch time will offset the loss of 2 threads (i7 960 @ 3.6GHz, showing ~3.7 GFLOP/s per core) I guess I'll know soon enough once RAC stabilises.

Again, thanks all for the help! Very much appreciated :)

Crunch on!

EDIT: I was thinking about getting the GTX 970 Mini-ITX version, mostly because it's cheaper and has a lower TDP so it shouldn't pop my circuit breakers lol... Thoughts? :)
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1707602 - Posted: 2 Aug 2015, 5:59:10 UTC
Last modified: 2 Aug 2015, 6:01:51 UTC

Hmm, it won't let me edit my quote, but after having a look around (and in regard to your feedback), it looks like I'm definitely keen on the 970. Ordinarily I go with Gigabyte since they've been super good to me - any recommendations regarding brand?

I'll be overclocking regardless, so pretty much price/high performance is key :)

Seriously though, thanks for all the help :) I've been crunching since 1999, but lost my account number lol.. So I started again in 2003 haha :)

Thanks heaps! :)

EDIT: Horrible spelling!
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1707604 - Posted: 2 Aug 2015, 6:15:31 UTC - in response to Message 1707602.  

I prefer EVGA for GPUs. I've had to RMA at least 4 over the years and they have never given me any trouble with exchanging them for new ones. I know that sounds bad, but considering how many GPUs I've gone through, it's not really that high a percentage, and I run my cards hard.

You only get 30 minutes in which to edit your quotes.

Happy Crunching..

Zalster
Profile TimeLord04
Volunteer tester
Joined: 9 Mar 06
Posts: 21140
Credit: 33,933,039
RAC: 23
United States
Message 1707711 - Posted: 2 Aug 2015, 15:55:56 UTC - in response to Message 1707602.  
Last modified: 2 Aug 2015, 15:59:15 UTC

Hmm, it won't let me edit my quote, but after having a look around (and in regard to your feedback), it looks like I'm definitely keen on the 970. Ordinarily I go with Gigabyte since they've been super good to me - any recommendations regarding brand?

I'll be overclocking regardless, so pretty much price/high performance is key :)

Seriously though, thanks for all the help :) I've been crunching since 1999, but lost my account number lol.. So I started again in 2003 haha :)

Thanks heaps! :)

EDIT: Horrible spelling!

Like Zalster, I'm strictly an EVGA guy. Regardless of which retailer you buy your card through, EVGA offers a 10-year warranty on their cards. You pay $40+ for the warranty, and then an additional $40+ for overnight shipping of the replacement card. Then, should you have any trouble with your 970 over those 10 years, they will replace it. If you make use of the warranty, you can repurchase it on your replacement/new card within 30 days of getting it.

I haven't had to make use of this warranty yet; my GTX-760 is only just over one year old now. I hope to be using it for quite some time to come.


TL

[EDIT]

My second GPU, the GTX 750 Ti, I got used, so no warranty... But it is an EVGA, so it should last a good long time.
TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join Calm Chaos
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14679
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1707729 - Posted: 2 Aug 2015, 16:52:48 UTC - in response to Message 1707604.  

You only get 30 minutes in which to edit your quotes.

That would be one hour.
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1708422 - Posted: 4 Aug 2015, 7:41:41 UTC

Thanks everyone for the replies; you've definitely helped me decide what I'll do with my rig. Now I just have to source everything, and then compare it to my current setup. I'll be sure to post when I have updates!

Crunch on! :)
Profile Zombu2
Volunteer tester

Joined: 24 Feb 01
Posts: 1615
Credit: 49,315,423
RAC: 0
United States
Message 1708457 - Posted: 4 Aug 2015, 9:29:53 UTC

I just had to RMA an EVGA GTX 480 with EVGA - those cards had a lifetime warranty.
I called them, they gave me no grief, approved my RMA, and told me they don't have any of those in stock anymore... the lowest replacement would be a 780... so now I'm gonna be the proud owner of another 780 with a lifetime warranty hrhr

I burned out one of my 580s overclocking and overvolting.. that card literally caught fire on the VRMs, loads of smoke etc etc.
I called EVGA at 3am, some guy picked up the phone and said "thank you for calling EVGA, what's up?"... I said "hey man, I lit my card on fire"... he said "ah, alright, how high did you get it to clock? I have to write that down."

Either way, to make a long story short, a day later I had my new card and a box to send the old one back in.

If you want to get a new card, look no further - EVGA is the best company to get it from.
I came down with a bad case of i don't give a crap
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1710594 - Posted: 9 Aug 2015, 22:10:46 UTC

Thanks all for the replies! In the end I just bought an EVGA GTX 970, and made sure that it had the ACX 2.0 cooler since it'll be put in the case alongside the GTX 660 Ti..

@Zombu2 - thanks for the advice :) while I've heard really good things about EVGA, this is the first card I'll have bought from them. Unfortunately, the StepUp program doesn't seem to be available in my country (New Zealand) so that option might be out :(

Hoping to break 30k RAC with the rig, but time will tell!

Crunch on!
Profile Zombu2
Volunteer tester

Joined: 24 Feb 01
Posts: 1615
Credit: 49,315,423
RAC: 0
United States
Message 1710595 - Posted: 9 Aug 2015, 22:22:56 UTC - in response to Message 1710594.  

Thanks all for the replies! In the end I just bought an EVGA GTX 970, and made sure that it had the ACX 2.0 cooler since it'll be put in the case alongside the GTX 660 Ti..

@Zombu2 - thanks for the advice :) while I've heard really good things about EVGA, this is the first card I'll have bought from them. Unfortunately, the StepUp program doesn't seem to be available in my country (New Zealand) so that option might be out :(

Hoping to break 30k RAC with the rig, but time will tell!

Crunch on!


Check out my machines and you can get an approximate idea of what the cards will do - I've got everything from 750 Tis to 980s. Keep an eye on them, since most of the cards have only just got started.
I came down with a bad case of i don't give a crap
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1710606 - Posted: 9 Aug 2015, 23:36:10 UTC - in response to Message 1710595.  
Last modified: 9 Aug 2015, 23:39:09 UTC

Awesome, will do mate :) my rig is still stabilising too, but nearly there I think. By the time the 970 arrives (hopefully Thursday/Friday), it should be stable so I can see what the new card adds to it :)

Crunch on!

EDIT: Sweet, got my 1% badge :) I thought I might have needed to wait for the new GPU, but the extra wee bit I squeezed out of my 660 Ti just bumped me over the mark.. Woo! Beers (and non alcoholics) on me! :)
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1710722 - Posted: 10 Aug 2015, 6:48:36 UTC - in response to Message 1710606.  

Awesome, will do mate :) my rig is still stabilising too, but nearly there I think. By the time the 970 arrives (hopefully Thursday/Friday), it should be stable so I can see what the new card adds to it :)

If there are no server-side issues, it generally takes about 6-8 weeks for RAC to settle around its new normal range after making significant hardware changes.
Grant
Darwin NT
Profile Careface

Joined: 6 Jun 03
Posts: 128
Credit: 16,561,684
RAC: 0
New Zealand
Message 1712230 - Posted: 13 Aug 2015, 10:43:41 UTC

Yay, the 970 arrived! :) Updated drivers to the latest, as 312.xx didn't seem to support it; now it's crunching away :) It's boosting to 1265MHz straight out of the box, pretty happy about that! Wonder how she will OC? :D

Ran into some pretty silly problems along the way, though.. So, I have a Silverstone DA850 modular PSU, had it for a good 7-8 years; couldn't find the second dual 6-pin cable.. The card came with 2 dual molex->6-pin cables, so I needed 4 free molex. Sweet, I find the molex module, plug it in... it only has 3 molex and an FDD connector.. Oh well, I can sort that tomorrow at a PC store; thought I may as well put the cards in anyway, save me having to do it tomorrow. But no, the 970 doesn't fit in PCI-e slot 1 because of the SATA connectors at the edge of the board (i7 960 on GA-EX58-UD4P). But the 660 Ti's fans touch the exposed RAM chips on the 970 if it's in slot 2.. Now, the UV acrylic case I bought 11 years ago obviously wasn't designed with triple-SLI in mind :p neither the 660 Ti nor the 970 will go in slot 3. To further complicate things down the track, I have a PCI-e 1x SoundBlaster Z to wiggle around somewhere.. It won't fit in slot 1 because it hits the mobo's MCH10 HSF and heatpipe, and it won't fit between the 970 and 660 Ti if they're in slots 1 and 2..

Sooo... I pulled the 660 Ti out, 970 in slot 2, SoundBlaster in slot 2. It sucks, but it will have to do for now until I can figure out how best to fix it haha. I *think* my RAC would basically have plateaued at ~18000-19000 anyway, roughly going by the average daily credit over the 27 days I've been crunching on the 660 Ti, and taking into account the increase I saw when I added CPU crunching and OC'd the card further... So it will be interesting to see what happens to my RAC at stock boost.. It's nearly 11pm here, and as much as I *really* want to stay up and play with this beast, unfortunately, work on the 'morrow :(

I've noticed that over the couple of hours or so I've been crunching, average GPU load is 95%, and different AR WUs give varying levels of load.. HAR/VHAR aka "shorties" are usually quite spiky in GPU load, rarely if ever having long patches of consistently high load; they constantly bounce between 97-98%, with the occasional 96% and 99% popping up. The yummy long WUs with the 0.38-0.44 AR that give good credit keep the load at 98-99%, with the odd dip to 97%.. But I've never seen this much variation before; I think it's going to trend down soon too, as I'm seeing quite a few 93-94%. This is running 2 WU at a time, btw. So obviously I'm debating doing 3 WU at a time..

I really have no idea; memory controller load is at 80%+ (which is ~11% higher than on the 660 Ti), so I'd imagine adding a 3rd in there would just cause congestion and decrease output.. Thoughts? Otherwise I have a good few weeks ahead of me testing things and waiting for RAC to stabilise lol. The temps at stock boost are staying around 66-67C, which means the fan is on ~11% and I can't hear a thing. Obviously this will all change when I OC it and change the fan profile (I did give 100% a go though :p just for funsies~ boy do they motor! lol), but I was still quite impressed with it; hopefully this means there will be quite a bit of headroom for OCing!

Oh, regarding the Xperia Z crunching - I still haven't figured out a way around the battery problem. Using 2 cores and setting it to 1512MHz (max, stock) while wall charging, it charges very slowly, but it does charge. I had it plugged in all last night, started with ~34% and ended with ~71%; it was plugged in all day at work today and added ~9% or so, as it was in very occasional use. On the plus side, in the almost 3 days it's been crunching, it's got 522 credit and 47 RAC... go go! :D

Crunch on!