Posts by FloridaBear

1) Message boards : Number crunching : Ryzen and Threadripper (Message 2009408)
Posted 26 Aug 2019 by Profile FloridaBear
Post:
A 12-core, 24-thread 3900 with a TDP of 65W now sounds interesting enough for me to definitely hold off on buying the hardware for my new system.


The 3900X has a default TDP of 105W, but a stock power limit (PPT) of 142W without unlocking PBO--about a 35% overhead. If the 3900 is proportioned the same way, that puts it at roughly 90W max. If I set PPT to 90W on my 3900X, the clocks settle in at about 3.5 GHz. Temps run at about 54C at this speed (with a radiator; YMMV).
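For what it's worth, the overhead arithmetic works out like this (wattages as quoted here; carrying the 3900X's PPT-to-TDP ratio over to the 3900 is just my guess, not an AMD spec):

```python
# Estimate the 3900's stock power limit (PPT) from the 3900X's
# PPT-to-TDP ratio. Applying that ratio to a different SKU is an
# assumption, not a published spec.
tdp_3900x = 105   # watts, default TDP
ppt_3900x = 142   # watts, stock power limit without PBO

overhead = ppt_3900x / tdp_3900x - 1     # ~0.35, i.e. about 35%
ppt_3900_est = 65 * (1 + overhead)       # 65 W is the 3900's TDP
print(f"overhead ~{overhead:.0%}, estimated 3900 PPT ~{ppt_3900_est:.0f} W")
```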

On another note, I really like the ability to set the maximum allowed temperature in the BIOS, which makes for pretty worry-free crunching. Currently, I set my max temp to 78C and unlock PBO, which yields the following (all values approximate):

Temp: 78C (remains constant under full load)
PPT: ~155 watts
Clocks: ~3.96 GHz

This is running an NZXT Kraken x62, so short of sub-ambient cooling, this is pretty maxed out for a 3900X.

2) Message boards : Number crunching : question about OC on evga GTX460SE (Message 1144200)
Posted 23 Aug 2011 by Profile FloridaBear
Post:
Just as another data point, I have my Asus GTX 460 1 GB running well at 810/1620/1900@1.06V on the 280.26 drivers. Seeing virtually no downclocks; it used to downclock a bit more. I don't know if the card just needed "breaking in" or what. Anyway, it's been a workhorse so far.
3) Message boards : Number crunching : How do I set up for 2 WU's Per My 460 GPU ? (Message 1142140)
Posted 18 Aug 2011 by Profile FloridaBear
Post:
LOL, you're not crazy. That's exactly the improvement I've documented before when going from 1 at a time to 2 on my GTX 460. Just relax and enjoy the higher RAC.
4) Message boards : Number crunching : Sanity check on my new gpu's (Message 1141001)
Posted 16 Aug 2011 by Profile FloridaBear
Post:
I don't have any direct experience with the GT 520, but they are rated at 155 GFlops. Your wingman on a few recent WUs had a GTX 275, which is rated at about 1010 GFlops. Your runtimes seem to be about 6 times that of the 275, so I'd say they're in the ballpark (you're actually doing a bit better than the flop ratio due to the optimized client). Memory use sounds good as well.

I think you should get roughly 2.3K credits per day per board based on some recent valid WUs.

If you look at performance per watt, your GPUs get about 5.4 GFlops per watt (it's a 30 watt GPU). That is still right in the ballpark with most newer cards. The current king is the 560 Ti, at about 7.4 GFlops per watt, and you're not that far off that mark. My GTX 460, for example, is rated at 5.67 GFlops per watt: it consumes 160 watts and is rated at 907 GFlops. It should be capable of about 20K per day.
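Those per-watt figures fall out of a straight division of NVIDIA's published peak GFLOPS and TDP numbers (the GT 520 is actually listed at 29 W, which rounds to the 30 W quoted above):

```python
# GFLOPS-per-watt comparison using NVIDIA's nominal single-precision
# peak throughput and TDP for each card.
cards = {
    "GT 520":     (155,  29),   # (peak GFLOPS, TDP watts)
    "GTX 460":    (907, 160),
    "GTX 560 Ti": (1263, 170),
}
for name, (gflops, watts) in cards.items():
    print(f"{name}: {gflops / watts:.2f} GFLOPS/W")
```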

I hope that answers your question...perhaps I got carried away though...
5) Message boards : Number crunching : Stange down clocking on nvidia GPUs running any CUDA BOINC apps (Message 1138550)
Posted 10 Aug 2011 by Profile FloridaBear
Post:
I've been fighting with the downclocking issue on my GTX 460 as well (and with the GTX 260 when it was installed along with the 460). My observations are as follows, but may or may not agree 100% with yours.

    Downclocking only appears to happen with drivers later than 266 (e.g. the 275 and 280 series). I have not had it occur with 266.58.

    With the drivers that downclock, I often have it happen when I switch to a game and back or otherwise pause BOINC. The card will downclock normally when in 2D mode with BOINC GPU apps not running, but will not clock back up when BOINC GPU apps restart.

    I do see it sometimes downclock while steadily running BOINC GPU apps, which could be a heat or voltage issue as others have reported.

    As an alternative to rebooting when this happens, a driver reinstall on the fly will restore normal clocks (or "overclockable" status). For a while, I just reinstalled the 275 drivers whenever I experienced downclocking. That got old, so I'm back to 266 for now, even though the 275 and 280 drivers are definitely better overall.

6) Message boards : Number crunching : Comparing WUs to wingman (Message 1135873)
Posted 4 Aug 2011 by Profile FloridaBear
Post:
All a moot point until I can actually download work.
7) Message boards : Number crunching : Comparing WUs to wingman (Message 1135802)
Posted 4 Aug 2011 by Profile FloridaBear
Post:
As an addendum, revisiting the numbers, my 460@770MHz does two shorties in about 3:20; my 260@690MHz does two shorties in about 5:18. They do shorties one at a time in 1:53 and 2:14, respectively.

So basically, it seems the 460 does about 3K less per day doing one at a time, while the 260 does 3K less doing two at a time. So by putting them both in one machine, it's basically a wash (until such time as I can specify how many concurrent WU's to do on each GPU).

So while it's true the 260 has lower throughput doing 2 at a time, it's on the order of 19%--not too drastic. The 460 picks up about 13%.
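Those percentages fall out of the per-WU times directly (a quick check using the shorty times quoted above):

```python
# Per-WU throughput behind the 13% / 19% figures, from the mm:ss
# shorty times quoted in the post.
def secs(mm, ss):
    return mm * 60 + ss

# GTX 460: two shorties concurrently in 3:20, one at a time in 1:53
gain_460 = secs(1, 53) / (secs(3, 20) / 2) - 1   # ~ +13% faster per WU
# GTX 260: two concurrently in 5:18, one at a time in 2:14
loss_260 = (secs(5, 18) / 2) / secs(2, 14) - 1   # ~ +19% slower per WU
print(f"460 gain: {gain_460:.0%}, 260 loss: {loss_260:.0%}")
```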

I guess the solution is to get another 460 and pass the 260 down to the kids ;-)
8) Message boards : Number crunching : Comparing WUs to wingman (Message 1135786)
Posted 4 Aug 2011 by Profile FloridaBear
Post:
I thought the 260s could only practically crunch 1 WU at a time. That's what I have mine doing, and the 460s and 560 Tis do 2 WUs at a time (some people have them doing 3, but I'm not certain there is any benefit).


I recently added a 460 to my PC that already had a 260, and in order to extract more performance from the 460, I switched to doing 2 WU's at a time on both cards. I did not see any throughput degradation on the 260--it was still doing a theoretical ~15K per day. The 460 was doing ~20K theoretical per day. These numbers came from averaging about a dozen varied completed WU's.

I do remember on older versions of the optimized clients, the 260's utilization would actually drop when doing 2 at a time, but that is certainly no longer the case. I think it may slightly improve throughput, since you are not wasting time between WU's (unless they both complete at the same time). I'd like to hear from others running 2 at a time on 260s though, it's an interesting topic.
9) Message boards : Number crunching : Goodbye (Message 1135180)
Posted 2 Aug 2011 by Profile FloridaBear
Post:
Actually, I think that low-power idle instructions on the CPU were not always available. Back in the 386 and 486 days, I believe free cycles were actually free--the CPU essentially ran at 100% all the time. I could be wrong on that of course.
10) Message boards : Number crunching : Comparing WUs to wingman (Message 1135010)
Posted 2 Aug 2011 by Profile FloridaBear
Post:
Occasionally I like to compare valid WU's I've crunched against my wingman's to see whether my times are better or worse, and to try to pinpoint why in either case. Unfortunately, this has become nearly impossible, since there is no log anywhere of how many WU's a given GPU was processing simultaneously. I process two at a time, since I can't really manage 3 on my GTX 260, even though the 460 could handle it. When looking at a wingman's WU, unless I'm missing something, there's no way to tell how many WU's that GPU was processing concurrently. It's frustrating. Anyone have any ideas?
11) Message boards : Number crunching : Testing beta driver 280.19 on NVIDIA GeForce GTX 570 (Message 1135008)
Posted 2 Aug 2011 by Profile FloridaBear
Post:
I had some weird problems on the 280 beta while running both a GTX 260 and a GTX 460 (lockups, strange downclocking, etc.). I had to roll back. I did see decreased memory consumption on the cards, but very little difference in GPU or CPU utilization.
12) Message boards : Number crunching : Question about CyberPower PR2200 UPS (Message 1132853)
Posted 28 Jul 2011 by Profile FloridaBear
Post:
Currently, I have my UPS plugged into my PC via USB cable (most new UPSes support this). Windows 7 recognized it without any driver installation, and my PC acts kind of like a laptop (there's a battery icon in my system tray showing the UPS's charge).

I've also configured BOINC to not run on batteries, so if we lose power, BOINC suspends crunching until we're back on AC, and then resumes crunching automatically. Yes, it does involve some crunching downtime, but you don't need quite as high a rating on the UPS, and it will charge back up quicker.
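For reference, that "don't run on batteries" setting lives in BOINC's global preferences; a minimal fragment of `global_prefs.xml` (the exact checkbox wording in the Manager GUI varies by BOINC version):

```xml
<!-- Fragment of BOINC's global_prefs.xml: 0 = suspend computing while
     on battery power, as described above. -->
<run_on_batteries>0</run_on_batteries>
```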

I've tested this and observed it in action when we had a power outage earlier today actually (for a few seconds--long enough for BOINC to suspend and resume). It's quite a nice setup. Also, Windows will automatically do a graceful shutdown when charge runs low on the batteries.

EDIT: It would be cool if BOINC offered a "computing allowed if batteries above xx%" option. But alas, it doesn't yet.
13) Message boards : Number crunching : GTX 460 (Message 1130161)
Posted 20 Jul 2011 by Profile FloridaBear
Post:
Just as another couple of data points:

I have a new ASUS GTX 460 1GB card. I'm running at 805/1610/1900 (core/shader/mem). I have experienced the downclocking issue twice, and that is with the new 275.33 drivers. I have switched to software fan control and adjusted the curve so that the fans run faster at a given temp, keeping temps a bit lower (it now maxes out at about 65C).

The performance seems good with the new Lunatics 0.38 code--it just completed two shorties (simultaneously) in 3:46. It definitely improved performance on the GTX 260 as well, and I don't think there's a penalty (if not an improvement) for running two at a time on the 1 GB 260.

I will keep watching for the downclock issue--it may be that 805 is too fast or something.

[EDIT]: Just as an off-topic remark, I noticed that SpeedFan 4.44 final is out, and the "advanced speed control" is now working for my system, where before it did not. In case you're not familiar with that, it allows you to set up a temperature curve to control fan speed. I think it's far better than the old method he used, and I'm now using that successfully for 3 different fans (CPU, NB and case).

[EDIT2]: Whose stats are in my sig? LOL!
14) Message boards : Number crunching : Making Core i7 Quieter (Message 1031596)
Posted 7 Sep 2010 by Profile FloridaBear
Post:
One other thing not discussed here is to lower VCore. That can make a huge difference in the core temps and the cooling requirements. For example, my Core i7 is running at 3.35 GHz but at only 1.094 VCore under load. At idle, this drops back to 0.8 V. Yet, under load my core temps (actual) are at most in the low 70's.

In winter, I go a bit higher to about 3.8GHz with 1.19 volts VCore.

Many others have similar results with this chip. If your BIOS allows it, lower VCore until you notice instability, then take it back up a few hundredths of a volt.
15) Message boards : Number crunching : 2 retiring (gracefully) - 1 new cruncher arrives (Message 995405)
Posted 10 May 2010 by Profile FloridaBear
Post:
if you bump the voltage to 1.40-1.45 you'll be able to get a bit more out of that CPU


Yikes...on my 920 I don't need more than 1.2V to run it at 3.8 GHz. At a more "summer-friendly" 3.4 GHz, I have it running at 1.094V under load. I found the key with my setup is to make sure the RAM is stable. It required VTT (+250mV) and DIMM boosts (1.65V). Very stable all the way to 4 GHz with those settings.
16) Message boards : Number crunching : i7 and hyperthreading (Message 992464)
Posted 28 Apr 2010 by Profile FloridaBear
Post:
I believe the cores are grouped in twos--i.e. core 0 and core 1 correspond to one physical core, core 2 and core 3 are another, etc. It really doesn't matter if a process runs on core 0 or core 1--it's the same physical core.

Windows will normally spread tasks out among all 8 cores. If you set BOINC to run 50% of CPUs, then manually set affinity for those 4 running tasks such that they're running on cores 0,2,4 and 6, you'll see even core temps and load. Of course the load will only be 50% (according to Windows) since you're not using hyperthreading. But that scenario would be basically equivalent to running without hyperthreading at all.
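That logical-to-physical mapping can be sketched like this (assuming HT siblings are enumerated adjacently, i.e. logical CPUs 2n and 2n+1 share physical core n, which is the usual layout on this generation of Core i7):

```python
# Assumed enumeration: logical CPUs 2n and 2n+1 are HT siblings on
# physical core n (typical for a 4-core/8-thread Core i7).
def physical_core(logical_cpu):
    return logical_cpu // 2

# One BOINC task per physical core means pinning tasks to every other
# logical CPU: 0, 2, 4, 6.
one_per_core = [lp for lp in range(8) if lp % 2 == 0]
print(one_per_core)                                        # [0, 2, 4, 6]
print(sorted({physical_core(lp) for lp in one_per_core}))  # [0, 1, 2, 3]
```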

There must be some advantage to running hyperthreading or there would be no point in offering it in the first place. One way to verify this that I've found is using y-cruncher (a multithreaded pi digit calculator)--benchmarks are significantly faster with HT on. I imagine you would get the same throughput improvement in SETI@Home although it's very difficult to verify.

You're correct in that hyperthreading can only be switched off or on in the BIOS.
17) Message boards : Number crunching : Credit Dispartiy (Message 992036)
Posted 26 Apr 2010 by Profile FloridaBear
Post:
At the other end of the spectrum, GPU's are claiming less than CPU's for units of about 30 credits or less. Think of it this way, though--the GPU's are actually doing MORE calculations than CPU's on the larger units. Is it really so bad that they get more credit once in a while?
18) Message boards : Number crunching : Switching to CP.net - wish me luck! (Message 992032)
Posted 26 Apr 2010 by Profile FloridaBear
Post:
I've switched 50% to World Community Grid. But since they do not have any GPU jobs yet, probably at least 75% of my RAC will still be S@H (I may up my WCG share because of that). They have an intermittent project called The Clean Energy Project that will be searching for favorable organic molecules for solar cells. This will begin phase 2 in June. Their other projects seem worthwhile too.
19) Message boards : Number crunching : What clock speed is your i7 running at? (Message 991121)
Posted 22 Apr 2010 by Profile FloridaBear
Post:
Running at 3.8 GHz on an EVGA X58 3xSLI, VCore set at 1.15625V (without VDroop--loaded VCore is 1.18-1.20V). Core temps mid-70s at full load. Air cooling with a Cooler Master Hyper N520 with two 2900 rpm 92mm fans.

Temps are really the limit here--I like to keep things in the 70s, and 4 GHz plus requires voltage increases and temps in the 80s.
20) Message boards : Number crunching : GPU Load only about 36% (Message 989496)
Posted 16 Apr 2010 by Profile FloridaBear
Post:
Are you dedicating a core to the GPU? My GTX 260 typically has 70-80% utilization on my Core i7 920. 36% seems really low...


©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.