Tom (Dec 23 2008)

Message boards : Technical News : Tom (Dec 23 2008)



Profile computerguy09
Volunteer tester
Joined: 3 Aug 99
Posts: 80
Credit: 9,570,364
RAC: 3
United States
Message 847294 - Posted: 31 Dec 2008, 14:57:26 UTC - in response to Message 846999.  

Hi Guys

I'm at work atm, so will try this after I get home.

I think I was a little tired yesterday, and some of the threads are making more sense today.

After what happened with the cut and paste cc_config.xml last night, I must say I'm a little wary of doing it again, but "Nothing ventured, nothing gained".


Here's some free advice. If the projects set the % of CPU required for the GPU application to the right value, BOINC *should* manage the number of WU's of the correct type for you. By this, I mean that you shouldn't have to set NCPU's in the config file to anything higher than the actual number of CPUs/cores. I have seen 5 WU's running on my quad with a single GPU card. However, for some projects (GPUGRID, for example), the amount of CPU required to keep the GPU app fed takes 60% of a core, and so trying to run a CPU WU with a GPU WU isn't worth it. This only occurs on Windows - the Linux apps take only about 10-20% of a core to keep the GPUGRID app fed.
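For anyone unsure what file is being discussed: a minimal cc_config.xml of the kind Mark is warning about might look like this (the value 4 is just an example for a quad-core; the point of his advice is that <ncpus> should not be set higher than the real core count, or should be left out entirely so BOINC detects the hardware itself):

```xml
<cc_config>
  <options>
    <!-- Overrides the number of CPUs BOINC thinks this host has.
         Per the advice above: keep this at (or below) the actual
         number of cores, or omit it and let BOINC auto-detect. -->
    <ncpus>4</ncpus>
  </options>
</cc_config>
```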

So, my advice is to stay away from the config XML file, and run more than one project so that you can easily keep all GPU and CPU cores running.

Mark

ID: 847294
Profile littlegreenmanfrommars
Volunteer tester
Joined: 28 Jan 06
Posts: 1410
Credit: 934,158
RAC: 0
Australia
Message 848121 - Posted: 2 Jan 2009, 9:55:41 UTC - in response to Message 847294.  

Thanks CG

I am running Einstein, Beta and S@h. It seems AP and CUDA apps cannot run simultaneously on the same CPU.

If the indication of "0.05 CPU's 1 CUDA" shown under the tasks tab is accurate, it would seem the CUDA apps are only using 5% of the second CPU.

There is mention in another thread of this version of BOINC (6.4.5) having a bug in the "work fetch" module. If that is correct, I shall just have to wait for an updated version to be released.

Thanks to all of you for your time, patience and helpful comments. I shall just have to be patient, it seems.

Happy New Year! :)
ID: 848121
Profile Sharlee

Joined: 4 Jun 00
Posts: 8
Credit: 737,012
RAC: 0
United States
Message 848150 - Posted: 2 Jan 2009, 12:31:37 UTC - in response to Message 846348.  

I have 3 projects running on a dual core... and all is well. Setup for the CUDA app is # of cores + 1.
ID: 848150
Profile littlegreenmanfrommars
Volunteer tester
Joined: 28 Jan 06
Posts: 1410
Credit: 934,158
RAC: 0
Australia
Message 854912 - Posted: 18 Jan 2009, 4:34:44 UTC

Well, after a couple of weeks' observation, BOINC occasionally runs three WUs simultaneously, but often runs fewer.
I shall have to be content with that, I think.
ID: 854912
Profile Misfit
Volunteer tester
Joined: 21 Jun 01
Posts: 21804
Credit: 2,815,091
RAC: 0
United States
Message 882968 - Posted: 7 Apr 2009, 2:34:34 UTC - in response to Message 844631.  
Last modified: 7 Apr 2009, 2:34:49 UTC

However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder" which holds at any given time the names of 100 available workunits to send out.

Does the "100" amount pertain only to SETI or is this defaulted BOINCwide?
me@rescam.org
ID: 882968
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 882980 - Posted: 7 Apr 2009, 3:03:11 UTC - in response to Message 882968.  

However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder" which holds at any given time the names of 100 available workunits to send out.

Does the "100" amount pertain only to SETI or is this defaulted BOINCwide?

Quoting from the sched_shmem.h file in the BOINC source code:

// Default number of work items in shared mem.
// You can configure this in config.xml (<shmem_work_items>)
// If you increase this above 100,
// you may exceed the max shared-memory segment size
// on some operating systems.
//
#define MAX_WU_RESULTS 100

See also http://boinc.berkeley.edu/trac/wiki/ProjectOptions#Scheduling:job-cachescheduling.
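As that comment notes, a project admin can raise the feeder size in the project's config.xml. A minimal sketch of what that looks like (200 is an arbitrary example value, subject to the shared-memory caveat in the comment above):

```xml
<boinc>
  <config>
    <!-- Number of work items the feeder keeps in shared memory.
         Raising this above the default of 100 may exceed the
         shared-memory segment limit on some operating systems. -->
    <shmem_work_items>200</shmem_work_items>
  </config>
</boinc>
```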
Joe
ID: 882980



 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.