Message boards : Number crunching : You have to love early Xmas and B'day presents (even if you have to pay for them yourself)
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

With all the GBT work around I was still maintaining a RAC of just above 105K, but now with this spell of AP work that's back over 109K again.

Cheers.
Stephen "Heretic" ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628
|
Wiggo, +1 :) |
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

Even with all the GBT work around these days, the recent AP frenzy has pushed my RAC to over 112K, my highest ever (not too bad for an average 810W of constant wintertime warmth).

Cheers.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13987 · Credit: 208,696,464 · RAC: 304

I know these are not new, but they can be half the cost of a 1070, and EVGA said their 970 cards use 170 W, very close to the 1070's 150 W. Performance-wise the GTX 970 comes in just behind the GTX 1060s, which use about 50 W less power. Cheaper purchase v higher running costs.

Grant
Darwin NT
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462

> I left off the -tt 1500 bit, but that noticeably cut around 1 min 30 secs off the 1060's already short Guppie times, while cutting the same amount of time from the 660's isn't quite so apparent, though still an improvement.

Wiggo, I am unclear. Did dropping the -tt 1500 cut the "1 min 30 secs off", or did adding it? I have a couple of 1060's in two different boxes and I am always interested in "faster" :)

Thanks,
Tom

A proud member of the OFA (Old Farts Association).
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

> I left off the -tt 1500 bit, but that noticeably cut around 1 min 30 secs off the 1060's already short Guppie times, while cutting the same amount of time from the 660's isn't quite so apparent, though still an improvement.

I never implemented the -tt 1500 bit to start with, Tom; the lower times came just from implementing the rest of the cmdline over plain stock. ;-)

Cheers.
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

With the temporary addition of an old C2D E6300 w/ a 550 Ti (for extra heating, and to see how the latest version of BOINC runs at stock), my RAC is well over 116K now.

I must say that the latest BOINC version is even worse than the version I tried a few years back, and I had to manually reserve a CPU core so that everything stays happy (this last bit should be done by default when running SETI with a GPU; I can see that without it, it will bring a lot of rigs to their knees and likely make new members give up real quick).

Cheers.
betreger · Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66

My 2nd 1060 arrived yesterday at 2:30 pm and was up and running an hour later. So far so good. It is set to run 50/50 SETI and Einstein, but for some reason it's only running Einstein atm. BOINC will hopefully give SETI its fair share soon.
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462

> I must say that the latest BOINC version is even worse than the version I tried a few years back, and I had to manually reserve a CPU core so that everything stays happy (this last bit should be done by default when running SETI with a GPU; I can see that without it, it will bring a lot of rigs to their knees and likely make new members give up real quick).

So far, all attempts I have made to reserve a core for non-BOINC work have resulted in all the cores being slowed down rather than one discrete core being "idled". I haven't had, so far, any trouble assigning a core or even multiple cores to a GPU. Is that what you're talking about? Or are you talking about using the various "maximum" project or individual task parms in app_config.xml files to limit the cumulative total of tasks to less than the total number of cores (that would be tricky if you have a lot of different projects)?

Sorry if I am not clear. So how do you do this?

> I had to manually reserve a CPU core

Thank you,
Tom

A proud member of the OFA (Old Farts Association).
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462

> My 2nd 1060 arrived yesterday at 2:30 pm and was up and running an hour later. So far so good. It is set to run 50/50 SETI and Einstein, but for some reason it's only running Einstein atm. BOINC will hopefully give SETI its fair share soon.

I think I have been seeing a refusal to download more SETI tasks while the current cache is "too full", so that might be what is going on with you.

Tom

A proud member of the OFA (Old Farts Association).
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242

I find I can't run both at the same time with the same values of 100, or even 50. One of the projects has to be a backup for the other. If Einstein is running, BOINC looks at how many work units are in the cache and will not allow SETI to download any work units. I literally have to run the cache down before it will allow any work from SETI to download.
Darrell · Joined: 14 Mar 03 · Posts: 267 · Credit: 1,418,681 · RAC: 0

IDK, I think there's a problem on the server. I run with no cache and I haven't gotten a task since 8 am on the 11th. With the work fetch debug flag set, I can see SETI and SETI Beta have been in a resource backoff all day.

Edit: Speak of a server problem and the server finally coughs one up. Or it could be due to the preference change I made just before making the original post.
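The work fetch debug flag Darrell mentions is a standard BOINC event log option; it can be switched on from the Manager's Event Log Options dialog or via cc_config.xml in the BOINC data directory. A minimal sketch, assuming no other cc_config options are in use, to be checked against your client version's documentation:

```xml
<!-- cc_config.xml in the BOINC data directory -->
<cc_config>
  <log_flags>
    <!-- log the client's work fetch decisions, including per-project backoffs -->
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```

The client picks this up after a restart or a "read config files" command from the Manager.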
betreger · Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66

> My 2nd 1060 arrived yesterday at 2:30 pm and was up and running an hour later. So far so good. It is set to run 50/50 SETI and Einstein, but for some reason it's only running Einstein atm. BOINC will hopefully give SETI its fair share soon.

That's not what's going on; I have a lot of SETI Nvidia tasks downloaded and waiting to run. Methinks BOINC believes I have an Einstein debt to pay back, because I was running the previous GTX 660 at 5/8 SETI and 3/8 Einstein.
betreger · Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66

After almost 48 hours of running straight Einstein, BOINC has started switching back and forth between SETI and Einstein.
Stephen "Heretic" ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628
|
Early Xmas and B'day presents, yeah they are always welcome here, soon I think I'll be getting something old, My dad's slides, and His projector and screen, not exactly computer related, but it's something, My sister in law is moving since She sold Her house... . . HI Zoom . . Congrats on the heirlooms, old can be good too. . . As for the 970s, I am running a brace of Gainward 970 Phoenix cards and though Gainward state they are 140W cards they show as 180W TDP under Linux and they use between 130W and 150W when crunching using CUDA80, this is in line with the power they were consuming when I was running SoG on them under Windows. But yes second hand they would cost much less than new 1070s. Still how much could you save on 1060 3GB in price (probably not much dearer than the 970s) and they use way less power (mine use about 80W to 90W crunching with CUDA80) so over time big savings on power costs for performance very similar to the 970s. Stephen :) |
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

Well my RAC this winter is now up to 118K (with the temporary help of 2 extra rigs).

The 3570K's RAC with the dual ASUS DUAL-GTX1060-O3G's is still sitting in the high 45K region.
The 2500K with the dual Gainward GeForce GTX 1060 3GB's is still sitting in the mid 44K range.
The Athlon II X4 630 with the dual Gainward GTX 660's is sitting in the mid 22K range.
The old C2D E6300 with a Gainward GTX 550 Ti is now up to 5.4K after 17 days.

The Athlon and C2D rigs will keep running for another 6-9 weeks to keep me warm until spring properly arrives here in the highlands down under.

Cheers.
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

> I must say that the latest BOINC version is even worse than the version I tried a few years back, and I had to manually reserve a CPU core so that everything stays happy (this last bit should be done by default when running SETI with a GPU; I can see that without it, it will bring a lot of rigs to their knees and likely make new members give up real quick).

By setting "On multiprocessor systems, use at most" to 50% for that system, to make up for all GPU work (both CUDA and OpenCL) being budgeted at 0.146 cores instead of a full core. A fully stock setup will run 2 CPU tasks plus a GPU task; once an OpenCL task hits the GPU it will bring the CPU to its knees, and even a CUDA task makes use of 60-70% of the whole E6300 while a core is being reserved.

Cheers.
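For reference, the "On multiprocessor systems, use at most" preference Wiggo describes corresponds to the max_ncpus_pct value that BOINC Manager writes to global_prefs_override.xml in the data directory when you set local computing preferences. A rough sketch of the relevant fragment, with the file and tag names given from memory and worth checking against the BOINC preferences documentation:

```xml
<!-- global_prefs_override.xml in the BOINC data directory (assumed location) -->
<global_preferences>
  <!-- "On multiprocessor systems, use at most 50% of the processors" -->
  <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>
```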
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462

> By setting "On multiprocessor systems, use at most" to 50% for that system, to make up for all GPU work (both CUDA and OpenCL) being budgeted at 0.146 cores instead of a full core.

I'm sorry, but I'm still confused. I set the local preferences to 50% of available CPUs. The number of tasks that BOINC was showing as running went down by half, but Task Manager still shows all cores active at a reduced processing level. Are you saying that you also need to set the app_config.xml files for "0.146 / cpu"? I am still trying to see how you "idle" a core under BOINC.

Thank you,
Tom

A proud member of the OFA (Old Farts Association).
Wiggo · Joined: 24 Jan 00 · Posts: 38452 · Credit: 261,360,520 · RAC: 489

> By setting "On multiprocessor systems, use at most" to 50% for that system, to make up for all GPU work (both CUDA and OpenCL) being budgeted at 0.146 cores instead of a full core.

No, I'm reserving a core for the GPU's use, not leaving 1 core idle, Tom. I have 1 CPU task and 1 GPU task running on this rig instead of the full stock setup of 2 CPU tasks and 1 GPU task. Apart from that, this rig is set up the way a first-timer (n00b) would have it (no cc_config or app_config files are being used at all), but a first-timer without the knowledge would find the stock settings crippling, to say the very least, and that should be changed so first-timers don't give up because their PCs become unusable/unstable. The stock 0.146-core allowance for all GPU task support needs to be changed (especially for OpenCL tasks) if we want to keep new members here instead of driving them away and leaving their tasks to time out. ;-)

Cheers.
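For those who are willing to use an app_config.xml rather than the 50% preference, the usual way to reserve a core for the GPU is to tell BOINC to budget a whole CPU core per GPU task instead of the stock 0.146. A minimal sketch, assuming the multibeam app is named setiathome_v8 (confirm the exact name in client_state.xml) and that the file lives in the project's folder under the BOINC data directory:

```xml
<!-- projects/setiathome.berkeley.edu/app_config.xml (assumed path) -->
<app_config>
  <app>
    <name>setiathome_v8</name>      <!-- assumed app name; check client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>    <!-- run one task per GPU -->
      <cpu_usage>1.0</cpu_usage>    <!-- budget a full core per GPU task instead of 0.146 -->
    </gpu_versions>
  </app>
</app_config>
```

Note this only changes what the scheduler budgets (one fewer CPU task runs alongside the GPU task); it does not force the science app itself to use more or less CPU.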
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242

> By setting "On multiprocessor systems, use at most" to 50% for that system, to make up for all GPU work (both CUDA and OpenCL) being budgeted at 0.146 cores instead of a full core.

Tom, put your "use at most" back to 100%.

When you put it at 50%, you are telling the computer it can, and will, only have access to half of the available cores. Meanwhile, the programs will still attempt to run as many work units as you have specified in your different app_config.xml files.

Example: you have 8 cores (either 4 physical and 4 virtual, or 8 physical and no virtual). You tell the computer it can "use at most" 50%, or 4 cores, of either type of CPU. Your app_config.xml files for all your projects total 8 combined cores, so 8 work units will attempt to run. If a work unit uses only 40% of a core, that means 60% is free for another work unit, so those 2 work units will "share" that core up to 100% of 1 core. You can see here that if each work unit only uses a portion of a core, then another work unit will try to use the unused percentage, i.e. running up to the maximum you have allowed.

When Wiggo talks about setting 0.146 cores, he is describing the kind of limit you would want in your app_config.xml. That only works on the old CUDA 50 and CUDA 42 builds some people still run, and of course Petri's special app, which is not yet ready for generalised public use. That leaves only SoG, and SoG unfortunately will not honour such a value unless you use a command line option to decrease its usage: -use_sleep.

Of course, all of this is strictly SETI@home related. I can make no comment on how other projects use your resources....
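For the other approach Tom asked about, capping how many tasks run at once so the combined total stays below the core count, app_config.xml also accepts concurrency limits. A rough sketch with purely illustrative numbers and the same assumed app name as above:

```xml
<!-- projects/setiathome.berkeley.edu/app_config.xml (assumed path) -->
<app_config>
  <!-- never run more than 4 tasks from this project at once (illustrative number) -->
  <project_max_concurrent>4</project_max_concurrent>
  <app>
    <name>setiathome_v8</name>          <!-- assumed app name; check client_state.xml -->
    <max_concurrent>3</max_concurrent>  <!-- cap this particular app at 3 concurrent tasks -->
  </app>
</app_config>
```

As Zalster notes, this only controls how many tasks the scheduler starts; it does not change how much CPU each running task actually consumes.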