Message boards : Number crunching : GTX660/Ti Owners, just a quick question.
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
I know these can do 3 w/u's at a time for the best output, but what I want to know is: how does this affect desktop usage while having 3-4 browser windows open, each with several tabs open, and a few other low-priority programs running (this is my everyday workhorse and I don't need any lag)? When I get into using the heavy stuff I just suspend work and let that other work use the whole system's hardware, but I've noticed that ATM I'm not using anywhere near the resources that my GTX5xx's use while running 2 each. P.S. I know that I'm still using x41g, but I'm taking this 1 step at a time. Cheers. |
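[For readers wondering how "3 w/u's at a time" is actually set: at the time of this thread it was usually done with a `<count>` entry in app_info.xml, but on BOINC clients 7.0.40 and later the simpler route is an app_config.xml in the project directory. A minimal sketch, assuming the multibeam application name is `setiathome_enhanced` — check the app names your own client reports before using:]

```xml
<app_config>
  <app>
    <!-- app name must match what the BOINC client reports for this project -->
    <name>setiathome_enhanced</name>
    <gpu_versions>
      <!-- 0.33 of a GPU per task => 3 tasks run concurrently per card -->
      <gpu_usage>0.33</gpu_usage>
      <!-- fraction of a CPU core budgeted per GPU task -->
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

[After saving the file, "Read config files" in the BOINC Manager's Advanced menu applies it without a client restart.]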
Grant (SSSF) Joined: 19 Aug 99 Posts: 13746 Credit: 208,696,464 RAC: 304 |
I ran 3 WUs at a time on my GTX460 & then the GTX560Ti (both 1GB VRAM) with no problems on my daily-use computer. The only time I did run into issues was when it started to run out of video memory; then everything would come to a screaming halt and barely crawl along till I could close a few windows. That's why I went back to only 2 WUs at a time on that system. As long as you've got 2GB of VRAM I reckon it should run OK, even with heavy usage. If that usage pushes the memory controller load up towards 100% you might have problems, but I never had that occur, just running out of VRAM. Grant Darwin NT |
Chris Oliver Joined: 4 Jul 99 Posts: 72 Credit: 134,288,250 RAC: 15 |
You can run 3 WUs at a time; however, the card's processing output is weak compared to the 460 and 560. Expect 20 to 30 minutes completion time per work unit. There is very little lag doing other stuff whilst processing, unless you start watching fullscreen videos etc. |
Helsionium Joined: 24 Dec 06 Posts: 156 Credit: 86,214,817 RAC: 43 |
I'm using x41zc_cuda50 and the AP nVidia OpenCL application on my only machine (GTX 560 Ti + GTX 660), so my results aren't entirely comparable with your situation, but here's what I found out:
- Running 3 WUs is possible, but it's not worth it. The GPUs are maxed out with just 2 WUs anyway. With 3 WUs there was somewhat higher power usage and temperature and occasional slowdowns, with no noticeable positive effect on processing throughput.
- With 2 WUs some moderately GPU-intensive programs still work without major problems (e.g. Google Earth, HD video), but with 3 WUs they wouldn't, even if there was still enough VRAM free.
- If any CPU-intensive task runs at the same or higher priority than the GPU tasks, this can cause massive slowdown. So I run the GPU tasks at high priority AND make sure no other CPU-intensive tasks run at high priority.
But keep in mind that this (except that last point) might be different when using x41g. |
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
Video memory isn't a problem, as I'm using less than 600MB out of 2GB, the memory controller rarely sees over 55% usage, GPU load averages 75-80%, and I'm using no more than 75% of their total power limit. But these can't really be compared to their older brothers, as I've found out over the last 2 days, though I do know there are those here who have had these cards for a while now and could shed a little light on this question for me. ;-) ATM I'm just letting them do their thing and making sure that they're very stable, though at the rate they're going I do plan on making the change to 3 in the next day or so to find out for myself, but I wouldn't mind a little personal-experience feedback from those that have them. Oh, BTW, these are clocked at 1084MHz GPU and 6010MHz memory. Cheers. |
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
Chris Oliver wrote: "You can run 3 WU at a time however the cards processing output is weak compared to the 460 and 560. Expect 20 to 30 minutes completion time on work. There is very little lag for doing other stuff whilst processing unless you start watching fullscreen videos etc." OK, now this is the feedback I'm looking for. Yes, I know that they won't compare to a GTX560 Ti, but I just replaced my 560 Ti with a pair of 660's, which gets rid of a thermal headache, and some of my other software works better with 2 cards in my systems than just 1. Thanks for the feedback. Cheers. |
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
Helsionium wrote: "I'm using x41zc_cuda50 and the AP nVidia OpenCL application on my only machine (GTX 560 Ti + GTX 660) so my results aren't entirely comparable with your situation, but here's what I found out:" Yes, as you said, "...so my results aren't entirely comparable...". The 560's architecture isn't the same as its newer brother's, so for you, running 3 on each would have the 560 holding things back, while running 2 would hold the 660 back. Plus, at this stage I have no interest in running OpenCL apps as yet, but thanks anyway. Cheers. |
arkayn Joined: 14 May 99 Posts: 4438 Credit: 55,006,323 RAC: 0 |
|
Paul D Harris Joined: 1 Dec 99 Posts: 1122 Credit: 33,600,005 RAC: 0 |
Also, I think the temps have a lot to do with it as well, so do keep the temps down. |
tbret Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
Wiggo wrote: "I know these can do 3 w/u's at a time for the best output but what I want to know is..." Originally I didn't want to answer you because I wasn't sure of the correct answer. I'm more confident now. With either my 6- or 4-core Phenom CPUs, running 3 WUs at a time on my 660Tis, I don't notice any desktop problems running Rhapsody (streaming audio), Excel, and one or more instances of Firefox. But it may be significant that I'm not running any CPU tasks. None.

The delay in replying was that I was running 2 WUs per card and only recently increased to 3. I quit that a few days ago... I've been running Einstein for a few days and I'm pretty sure I'm feeling a decrease in responsiveness running 3 at a time over there (again, no CPU work). <even with the TCP fix I found computers that ran out of work, RAC dropping, etc, so I've switched projects for a while> I really think the "problem" with system responsiveness is Einstein's use of SYSTEM RAM and the traffic across that bus.

Since you are using Jason's optimized SETI apps and leave the GPU at "below normal" priority, I don't think you'll notice that SETI is even running. Jason did a spectacular job of getting the work to the GPU so you don't have heavy traffic to the CPU and system RAM. Updating to x41zc will do you as much or more good (increased production) as running 3 at a time using x41g. |
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
Thanks for the input, people. :-) ATM I'm holding off going to 3 until the Q6600's rapidly rising RAC levels out a bit and the relocation of servers is completed and running smoothly. Once that's done and I've made my final conclusions with the old CUDA app, I'll then upgrade the CUDA app and repeat the process all over again. :-D Cheers. |
William Joined: 14 Feb 13 Posts: 2037 Credit: 17,689,662 RAC: 0 |
Slightly off topic. Your most noticeable difference when going from x41g to x41zc will be that you can use the CUDA 5.0 version - there's a marked increase in speed for Keplers with CUDA 5.0. So your sweet spot may change.

I've only ever paid attention to the memory footprint on small cards. If anything, it got smaller as versions progressed. I can't remember if there is a difference between x41g and x41zc in that regard, and I suspect it will look different on a Kepler anyway.

Priority of x41zc: you shouldn't notice anything at the default. The bigger 670 (e.g. this host) can run 4 at a time at above-normal priority without breaking a sweat, handle IE with a lot of open tabs at the same time, and never notice a thing. If you have two cards you can use the advanced configuration to have the one driving the display at below-normal and the spare one at above-normal priority.

If you have CPU/GPU-intense programs it might be best to use BOINC's exclude option to stop crunching [IIRC advanced options allow you to suspend only the GPU]. And last but not least, it will all change with v7 - the above-mentioned host may be able to run 4 at a time under v6, but v7 takes a dip above 2 already - especially if you don't run SETI exclusively.

A person who won't read has no advantage over one who can't read. (Mark Twain) |
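[The "exclude option" mentioned above refers to BOINC's cc_config.xml. A minimal sketch, assuming a two-GPU box where device 0 drives the display - the project URL is SETI@home's, but the device number is illustrative and should match what your client's startup log reports:]

```xml
<cc_config>
  <options>
    <!-- Don't run this project's GPU tasks on device 0 (the display GPU) -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

[cc_config.xml lives in the BOINC data directory; "Read config files" in the Manager's Advanced menu reloads it.]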
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
After a week of running 3 w/u's per card, the rig seems to be going very well and is quickly catching back up to the RAC it had before the server move running 2; plus it has also passed my 2500K rig (dual GTX550Ti's) in the amount of work returned per day. The Q6600 runs 10 w/u's at a time while the 2500K does 8, and though the 2500K itself flogs the Q6600, the 660's certainly negate that big gap.

No noticeable lag with that setting either, which is very good, as this daily workhorse has dual monitors with at least 2 browser windows open at any 1 time and at least 6 tabs open in the main 2, plus 3-4 other program windows thrown into the mix (all depends on what I'm doing that day). I do stop GPU crunching when I do any video, picture or sound work (under heavy video work I also stop CPU work, but that only happens when my 2500K is busy doing similar work).

Memory usage on both cards sits around 830MB, while the GPU load sits around 80-85% and the memory controller load varies from 60-75%. If I remember, I'll post back in about 3 weeks' time with a further progress report. Cheers. |
Wiggo Joined: 24 Jan 00 Posts: 34854 Credit: 261,360,520 RAC: 489 |
I've found that when feeding the two GTX660's 3 w/u's each, I had to free up a CPU core, which brought each card up to the equivalent of three low-power 9800GT's (the down-clocked version that doesn't require external power). 2 days ago I updated from x41g to x41zc. The increase in performance is very noticeable, now making each card the equivalent of three 9800GTX+'s. In the end I'm very pleased with the results of the upgrade. Cheers. |
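[Freeing up a CPU core as described above can be done through BOINC's computing preference "On multiprocessors, use at most N% of the processors", or, on clients that support app_config.xml, by budgeting a full core per GPU task so the scheduler leaves it free. A sketch under that assumption - the app name is a guess and must match your client's reported app names:]

```xml
<app_config>
  <app>
    <name>setiathome_enhanced</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <!-- reserve a whole core per GPU task so GPU feeding isn't starved -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```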
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.