Message boards : Number crunching : SETI@home v8
Jeff Buck · Joined: 11 Feb 00 · Posts: 1441 · Credit: 148,764,870 · RAC: 0
> BTW, I see I have some APs coming up and they are running on the 730. The config file has the GPU and the CPU usage both at 0.5, which is different than the non-AP units at 0.5 on GPU and 0.04 on CPU. Will this be a problem when it goes to run the APs? I noticed that they are OpenCL NVIDIA 100 too.

Personally, I prefer not to run 2 APs at a time on a single GPU, although 1 AP plus 1 MB is usually fine. To allow for either 2 MBs at a time or 1 AP plus 1 MB, change your GPU usage to 0.51 for Astropulse tasks and 0.49 for MultiBeam (setiathome_v8 and _v7).

As far as CPU usage goes, there are strong feelings around here that a full core should be reserved for each AP task. I tend not to subscribe fully to that philosophy, but on my multi-GPU systems, where an AP might be running on each of my GPUs at the same time, I use that 0.5 figure for AP CPU usage, effectively reserving a full core only when at least 2 APs are running.

The CPU usage value only comes into play when the total value of all running tasks equals or exceeds an integer amount (1, 2, 3, etc.). In your current setup, you'd only reach that threshold if you did, in fact, let 2 APs run at the same time.

If you do choose to reserve a full core for AP, simply change your cpu_usage value to 1.0. What you'll notice, in that case, is that one of your running CPU tasks will drop into a "Waiting to run" state, and will remain that way until either the AP task or another CPU task finishes, at which point it will automatically resume.
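[The numbers Jeff describes go in BOINC's app_config.xml in the project directory. A minimal sketch, assuming the stock application names (setiathome_v8 for MultiBeam, astropulse_v7 for Astropulse) — check the names shown in your own client's event log before using:]

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- 0.49 lets two MB tasks share one GPU (0.49 + 0.49 < 1.0) -->
      <gpu_usage>0.49</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <!-- 0.51 blocks a second AP on the same GPU (0.51 + 0.51 > 1.0),
           but still allows 1 AP + 1 MB (0.51 + 0.49 = 1.0) -->
      <gpu_usage>0.51</gpu_usage>
      <!-- raise to 1.0 to reserve a full CPU core per AP task -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```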
AllenIN · Joined: 5 Dec 00 · Posts: 292 · Credit: 58,297,005 · RAC: 311
> BTW, I see I have some APs coming up and they are running on the 730. The config file has the GPU and the CPU usage both at 0.5, which is different than the non-AP units at 0.5 on GPU and 0.04 on CPU. Will this be a problem when it goes to run the APs? I noticed that they are OpenCL NVIDIA 100 too.

Sounds like very good advice. I will reconfigure my GPU settings for the APs. If that works, great; if not, I will go with the full CPU core for the APs. I would never have figured this out for myself.
AllenIN · Joined: 5 Dec 00 · Posts: 292 · Credit: 58,297,005 · RAC: 311
Jeff, just received 3 APs on one of my other rigs. This one is running an APU, and it automatically shut down one CPU core and is running two of the APs on the GPU. I'm a bit surprised, as I didn't believe the APU's GPU could handle that kind of load. It says I'm at 55°C, which is in the good range for this system. Still, I see all 4 cores of the CPU working, but only 3 CPU WUs running and 2 on the GPU. Very interesting, as I've never seen this before.
Jimbocous · Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349
> Personally, I prefer not to run 2 APs at a time on a single GPU, although 1 AP plus 1 MB is usually fine. To allow for either 2 MBs at a time or 1 AP plus 1 MB, change your GPU usage to 0.51 for Astropulse tasks and 0.49 for MultiBeam (setiathome_v8 and _v7).

Thanks, Jeff. Great tip!
Jimbocous · Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349
> I have an 8 core hyperthreaded to 16 and only have 100 total CPU work units.

Zalster, if you would, I'm curious to know what your experience has been regarding hyperthreading. Specifically, have you determined that there really is an improvement in overall throughput versus leaving it turned off? Anyone else have experience or results on this? Thanks!
Zalster · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242
The only chip I currently have with Hyper-Threading is the 5960X (8 cores, HT to 16). The majority of the cores are used to support the GPUs, but I still run 4 MB VLARs on them. At one time I did run 8 MB VLARs, but noticed the time to complete was much greater than when I ran 4. I tried 6 for a while; the completion time wasn't as bad, but it was still more than with 4. It becomes a matter of trying it, watching your times on both the GPU and CPU work units, and finding a balance.

Now, when we were doing Beta testing on the opencl_nvidia_sah for VLARs, I found that I needed to suspend all work on the CPU to give the GPU total access.

I've tried HT with lower-end Intel chips and found they really didn't add much to the mix; if anything, they actually hindered work, so I put them back to just the actual core count. So I think it's a question of the model of the Intel chip: the higher-end, the better the result. My 2 cents.
Jimbocous · Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349
> my 2 cents.

Thanks, that reinforces my experience as well. I was wondering if I'd missed something obvious. At the moment, I'm running a bit of a test with the two Xeon boxes (W3550s): one running 8 jobs, the other running 4. Seems to me that if the completion time is < 2.0x, that's a win; right now it seems to be ~1.5x. Guess we'll see what the RAC shows after a few weeks. Like you, I noticed on Beta that the OpenCL stuff really needed a true core. Thankfully, on Main it's CUDA 5.0 instead. Of course, when APs hit, that's a different deal, but I think by restricting it to 1 AP per GPU it might stay where I want it to. We'll see. Thanks for the note.
Lionel · Joined: 25 Mar 00 · Posts: 680 · Credit: 563,640,304 · RAC: 597
That would stack up with what I used to see with my Q6600 and Q9450. With HT on, time to complete would be about 50% longer but I would be doing 100% more WUs, so there was a net gain. I still have them but I run them with HT off these days (as with the others) and let the GPUs do most of the work. rgds |
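[The trade-off described in the last two posts comes down to simple arithmetic: HT is a win whenever the increase in concurrent work units outweighs the per-unit slowdown. A minimal sketch of that calculation, using the figures quoted above:]

```python
def throughput_gain(time_factor, wu_factor):
    """Relative throughput when each task takes `time_factor` times as long
    but `wu_factor` times as many tasks run concurrently."""
    return wu_factor / time_factor

# Lionel's figures: ~50% longer per WU, but 100% more WUs in flight.
print(throughput_gain(1.5, 2.0))      # ~1.33: a net gain

# Jimbocous's Xeon test: 8 jobs vs 4, completion time ~1.5x.
print(throughput_gain(1.5, 8 / 4))    # same ~1.33

# Break-even: doubling the jobs only loses if tasks take >= 2x as long.
print(throughput_gain(2.0, 2.0))      # 1.0: no gain
```

This is why Jimbocous's "< 2.0x is a win" rule of thumb holds: with double the jobs, any per-task slowdown short of 2x still raises total throughput.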
OzzFan · Joined: 9 Apr 02 · Posts: 15691 · Credit: 84,761,841 · RAC: 28
> That would stack up with what I used to see with my Q6600 and Q9450. With HT on,

The Core 2 series did not have Hyperthreading.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.