Seti@home v8

Message boards : Number crunching : Seti@home v8
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1758942 - Posted: 24 Jan 2016, 3:46:18 UTC - in response to Message 1758930.  

BTW, I see I have some APs coming up and they are running on the 730. The config file has the GPU and the CPU usage both at .5, which is different from the non-AP units at .5 on GPU and .04 on CPU. Will this be a problem when it goes to run the APs? I noticed that they are opencl_nvidia_100 too.

Personally, I prefer not to run 2 APs at a time on a single GPU, although 1 AP plus 1 MB is usually fine. To allow for either 2 MBs at a time or 1 AP plus 1 MB, change your GPU usage to 0.51 for Astropulse tasks and 0.49 for the MultiBeam (setiathome_v8 and _v7).

As far as CPU usage goes, there are strong feelings around here that a full core should be reserved for each AP task. I tend not to subscribe fully to that philosophy, but on my multi-GPU systems, where an AP might be running on each of my GPUs at the same time, I use that 0.5 figure for AP CPU usage, effectively reserving a full core only when at least 2 APs are running. The CPU usage value only comes into play when the total value of all running tasks equals or exceeds an integer amount (1, 2, 3, etc.). In your current setup, you'd only reach that threshold if you did, in fact, let 2 APs run at the same time.
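That threshold rule can be sketched in a few lines of Python (a loose illustration of the behavior described above, not actual BOINC client code):

```python
import math

def cpu_tasks_displaced(gpu_task_cpu_usages):
    """Number of CPU tasks pushed aside: the cpu_usage values of
    running GPU tasks only displace a CPU task once their sum
    reaches a whole number (1, 2, 3, ...)."""
    return math.floor(sum(gpu_task_cpu_usages))

# One AP at 0.5 plus one MB at 0.04: sum = 0.54, nothing displaced.
print(cpu_tasks_displaced([0.5, 0.04]))  # 0
# Two APs at 0.5 each: sum = 1.0, one CPU task has to wait.
print(cpu_tasks_displaced([0.5, 0.5]))   # 1
```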

If you do choose to reserve a full core for AP, simply change your cpu_usage value to 1.0. What you'll notice, in that case, is that one of your running CPU tasks will drop into a "Waiting to run" state, and will remain that way either until the AP task or another CPU task finishes, at which point it will automatically resume.
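For reference, these settings live in app_config.xml in the project directory. A minimal sketch of the values discussed above (the &lt;name&gt; values are my best guess at the current app names; check yours against the client logs):

```xml
<app_config>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>0.51</gpu_usage>
      <cpu_usage>0.5</cpu_usage>  <!-- or 1.0 to reserve a full core -->
    </gpu_versions>
  </app>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.49</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```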
ID: 1758942
Profile AllenIN
Volunteer tester
Joined: 5 Dec 00
Posts: 292
Credit: 58,297,005
RAC: 311
United States
Message 1758954 - Posted: 24 Jan 2016, 5:44:57 UTC - in response to Message 1758942.  

Personally, I prefer not to run 2 APs at a time on a single GPU, although 1 AP plus 1 MB is usually fine. To allow for either 2 MBs at a time or 1 AP plus 1 MB, change your GPU usage to 0.51 for Astropulse tasks and 0.49 for the MultiBeam (setiathome_v8 and _v7).


Sounds like very good advice. I will reconfigure my GPU settings for the APs. If that works, great; if not, I will go with the full CPU core for the APs. I would never have figured this out by myself.
ID: 1758954
Profile AllenIN
Volunteer tester
Joined: 5 Dec 00
Posts: 292
Credit: 58,297,005
RAC: 311
United States
Message 1758956 - Posted: 24 Jan 2016, 6:04:16 UTC - in response to Message 1758954.  
Last modified: 24 Jan 2016, 6:05:25 UTC

Jeff,

Just received 3 APs on one of my other rigs. This one is running an APU, and it automatically shut down one CPU core and is running two of the APs on the GPU. I'm a bit surprised, as I didn't believe the APU/GPU could handle that kind of load. It says I'm at 55°C, which is in the good range for this system. Still, I see all 4 cores of the CPU working, but only 3 CPU WUs running and 2 GPU. Very interesting, as I've never seen this before.
ID: 1758956
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1758958 - Posted: 24 Jan 2016, 6:39:55 UTC - in response to Message 1758942.  

Personally, I prefer not to run 2 APs at a time on a single GPU, although 1 AP plus 1 MB is usually fine. To allow for either 2 MBs at a time or 1 AP plus 1 MB, change your GPU usage to 0.51 for Astropulse tasks and 0.49 for the MultiBeam (setiathome_v8 and _v7).

Thanks, Jeff. Great tip!
ID: 1758958
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1759798 - Posted: 27 Jan 2016, 21:41:25 UTC - in response to Message 1753485.  
Last modified: 27 Jan 2016, 21:44:04 UTC

I have an 8 core hyperthreaded to 16 and only have 100 total CPU work units.

Zalster, if you would, I'm curious to know what your experience has been regarding hyperthreading? Specifically, have you determined that there really is an improvement in overall throughput versus leaving it turned off?
Anyone else have experience or results on this?
Thanks!
ID: 1759798
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1759806 - Posted: 27 Jan 2016, 22:27:00 UTC - in response to Message 1759798.  
Last modified: 27 Jan 2016, 22:27:58 UTC

The only chip that I currently have that has Hyperthreading is the 5960X (8 core HT to 16)

The majority of the cores are used to support the GPUs.

But I still run 4 MB VLARs on them. At one time I did do 8 MB VLARs, but noticed the time to complete was much greater than when I did 4.

I tried 6 for a while; the completion times weren't as long, but they were still longer than with 4.

It becomes a matter of trying it, seeing what your times are on both the GPU work units and the CPU, and finding a balance.
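That trial-and-error balance amounts to picking the concurrency level with the best throughput (tasks per hour). The times below are hypothetical placeholders just to show the calculation, not real benchmarks:

```python
# Average completion time (hours) per CPU VLAR at each concurrency
# level. These numbers are made up; substitute your own measurements.
avg_time = {4: 1.0, 6: 1.7, 8: 2.6}

throughput = {n: n / t for n, t in avg_time.items()}  # tasks per hour
best = max(throughput, key=throughput.get)
print(best, round(throughput[best], 2))  # here, 4 concurrent wins
```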

Now, when we were doing Beta testing on the Opencl_nvidia_sah for VLARs, I found that I needed to suspend all work on the CPU to give the GPU total access.

I've tried HT with lower-end Intel chips and found it really didn't add much to the mix; if anything, it actually hindered work, so I put them back to just the actual core count.

So I think it's a question of the model of the Intel chip: the higher the model, the better the result.

my 2 cents.
ID: 1759806
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1853
Credit: 268,616,081
RAC: 1,349
United States
Message 1759826 - Posted: 27 Jan 2016, 23:29:20 UTC - in response to Message 1759806.  
Last modified: 27 Jan 2016, 23:29:50 UTC

my 2 cents.

Thanks, that reinforces my experiences as well. Was wondering if I'd missed something obvious.
Atm, I'm running a bit of a test with the two Xeon boxes (W3550s): one running 8 jobs, the other running 4. Seems to me if the completion time is < 2.0x, that's a win. Right now it seems to be ~1.5x. Guess we'll see what the RAC shows after a few weeks.
Like you, I noticed on Beta that the OpenCL stuff really needed a true core. Thankfully, on Main it's CUDA 5.0 instead. Of course, when APs hit, that's a different deal, but I think by restricting it to 1 AP per GPU it might stay where I want it to. We'll see ...
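That break-even test (twice the jobs is a win if completion time stays under 2.0x) is just a throughput ratio. A quick sketch:

```python
def ht_speedup(task_factor, time_factor):
    """Relative throughput of HT-on vs HT-off:
    (tasks in flight) / (time per task)."""
    return task_factor / time_factor

# Twice the jobs at ~1.5x the completion time:
print(ht_speedup(2.0, 1.5))  # ~1.33, a net gain
# At exactly 2.0x the completion time you'd break even:
print(ht_speedup(2.0, 2.0))  # 1.0
```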
Thanks for the note ...
ID: 1759826
Lionel

Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1759839 - Posted: 28 Jan 2016, 0:30:45 UTC - in response to Message 1759826.  


Atm, I'm running a bit of a test with the two Xeon boxes (W3550s): one running 8 jobs, the other running 4. Seems to me if the completion time is < 2.0x, that's a win. Right now it seems to be ~1.5x. Guess we'll see what the RAC shows after a few weeks.


That would stack up with what I used to see with my Q6600 and Q9450. With HT on, time to complete would be about 50% longer, but I would be doing 100% more WUs, so there was a net gain. I still have them, but I run them with HT off these days (as with the others) and let the GPUs do most of the work.

rgds
ID: 1759839
OzzFan Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1759846 - Posted: 28 Jan 2016, 1:25:57 UTC - in response to Message 1759839.  

That would stack up with what I used to see with my Q6600 and Q9450. With HT on,


The Core 2 series did not have Hyperthreading.
ID: 1759846


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.