Message boards : Technical News : Tom (Dec 23 2008)
computerguy09 · Joined: 3 Aug 99 · Posts: 80 · Credit: 9,570,364 · RAC: 3

Hi guys, here's some free advice. If the projects set the % of CPU required for the GPU application to the right value, BOINC *should* manage the number of WUs of the correct type for you. By this I mean that you shouldn't have to set NCPUs in the config file to anything higher than the actual number of CPUs/cores. I have seen 5 WUs running on my quad with a single GPU card. However, for some projects (GPUGRID, for example), keeping the GPU app fed takes 60% of a core, so trying to run a CPU WU alongside a GPU WU isn't worth it. This only occurs on Windows; the Linux apps take only about 10-20% of a core to keep the GPUGRID app fed. So, my advice is to stay away from the config XML file, and run more than one project so that you can easily keep all GPU and CPU cores busy.

Mark
littlegreenmanfrommars · Joined: 28 Jan 06 · Posts: 1410 · Credit: 934,158 · RAC: 0

Thanks, CG. I am running Einstein, Beta, and S@h. It seems AP and CUDA apps cannot run simultaneously on the same CPU. If the "0.05 CPUs 1 CUDA" indication shown under the Tasks tab is accurate, the CUDA apps are only using 5% of the second CPU. Another thread mentions that this version of BOINC (6.4.5) has a bug in the "work fetch" module. If that is correct, I shall just have to wait for an updated version to be released. Thanks to all of you for your time, patience, and helpful comments. I shall just have to be patient, it seems. Happy New Year! :)
Sharlee · Joined: 4 Jun 00 · Posts: 8 · Credit: 737,012 · RAC: 0

I have 3 projects running on a dual core, and all is well. My set-up for the CUDA app is number of cores + 1.
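Sharlee's "cores + 1" set-up is usually done through the `<ncpus>` option in `cc_config.xml` (the config XML file computerguy09 advises leaving alone). A minimal sketch for a dual-core machine, assuming the 6.x client's `cc_config.xml` format; the value shown is illustrative:

```xml
<cc_config>
  <options>
    <!-- Tell the client to schedule one more task than there are
         physical cores, leaving headroom for the lightweight CUDA
         feeder thread. 3 = 2 cores + 1 on a dual-core box. -->
    <ncpus>3</ncpus>
  </options>
</cc_config>
```

The file goes in the BOINC data directory, and the client must be restarted (or told to re-read config files) for it to take effect.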
littlegreenmanfrommars · Joined: 28 Jan 06 · Posts: 1410 · Credit: 934,158 · RAC: 0

Well, after a couple of weeks' observation, BOINC occasionally runs three WUs simultaneously, but often runs only two. I shall have to be content with that, I think.
Misfit · Joined: 21 Jun 01 · Posts: 21804 · Credit: 2,815,091 · RAC: 0

> However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder" which holds at any given time the names of 100 available workunits to send out.

Does the "100" figure pertain only to SETI, or is it the default BOINC-wide?

me@rescam.org
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0

> However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder" which holds at any given time the names of 100 available workunits to send out.

Quoting from the sched_shmem.h file in the BOINC source code: "// Default number of work items in shared mem". See also http://boinc.berkeley.edu/trac/wiki/ProjectOptions#Scheduling:job-cachescheduling.

Joe
©2024 University of California
SETI@home and AstroPulse are funded by grants from the National Science Foundation and NASA, and by donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.