Message boards :
Number crunching :
Intel GPU
Dave Stegner (Joined: 20 Oct 04, Posts: 540, Credit: 65,583,328, RAC: 27)

Running no other projects. Installed Lunatics and added -SBS 128, nothing else.

Dave
Brent Norman (Joined: 1 Dec 99, Posts: 2786, Credit: 685,657,289, RAC: 835)

What is running at a high percentage in your Task Manager? If the GPU is killing the CPU, there has to be something running to do it.
BONNSaR (Joined: 9 Nov 04, Posts: 38, Credit: 21,538,589, RAC: 9)

Been with SETI a long time; thought I would try GPU on a new computer. I notice this host has 4 GB of memory. If you look at the Performance tab in Task Manager, how much memory is your computer actually using when running the iGPU tasks? And in the BIOS, how much memory have you assigned to the iGPU? 512 MB seems to run OK for me on an HD 4000 iGPU.
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)

[quote]I'm not overly talented at this but ... I'm sure you can do something like 0.02 CPU usage for your GPU, that way it won't tax your CPU and give a little to the GPU.[/quote]
A setting of <max_ncpus>0.02</max_ncpus> in an app_info.xml has nothing to do with the application's CPU usage while processing. It is used by BOINC for scheduling purposes. A setting of <cpu_usage>0.02</cpu_usage> in an app_config.xml is used to reserve CPU cores. Such as setting <cpu_usage>0.02</cpu_usage> while running 2 GPU instances to reserve 1 CPU core.

On an i5, or probably all Ivy Bridge/Haswell CPUs, when running SETI@home it doesn't matter if you are running just 1 CPU instance, leaving the other 3 cores free, when running the iGPU as well: the CPU times still increase to nearly double. It is probably a limitation of the hardware, as the iGPU does not have any memory of its own; it uses shared CPU cache & system memory. The theory of cache thrashing seems to be supported by the fact that the J1900 Celeron (Bay Trail) hardware I have does not show this same symptom when using the iGPU. Instead of a single shared cache, it uses a separate cache for every 2 cores.

SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)

[quote]I stopped one after 7+ hours and it was 46% complete[/quote]
I wonder if that has to do with your driver version, 10.18.14.4156. In the world of GPU crunching, the latest is rarely the greatest. For example, driver version 15.36.14.4080 was listed as DO NOT USE for SETI@home, as it would produce errors. When we were last discussing iGPU drivers, version 10.18.10.4061 (with 10.18.10.3621 as fallback) was known to be good. Version 4156 may be good as well, but it falls into unknown/untested. Importantly, the driver version is probably not related to that increase in CPU times; at least in the sense that there isn't a driver that will magically fix the issue at the moment.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)

[quote]A setting of <max_ncpus>0.02</max_ncpus> in an app_info.xml has nothing to do with the applications CPU usage while processing. It is used by BOINC for scheduling purposes.[/quote]
That won't have any effect at all. BOINC allows the CPU to be over-committed until the sum of all <cpu_usage> and <max_ncpus> fractions, across all running co-processor tasks, reaches 1.000000 (actually, I think <avg_ncpus> is dominant in app_info.xml, but that makes little difference to the principle). With a value of <cpu_usage>0.02</cpu_usage> in app_config, BOINC will allow 49 GPU tasks to run in parallel before reserving an extra core - but your hardware would probably have crashed long before that. To reserve the core at two tasks, the <cpu_usage> value would need to be between 0.5 and 0.99.
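The over-commit arithmetic described above can be sketched as follows. This is a simplified model of the scheduling rule, not BOINC's actual code: it just sums the per-task <cpu_usage> fractions and takes the whole-number part as the count of reserved cores.

```python
# Simplified model of BOINC's CPU over-commit rule: a whole CPU core
# is only reserved once the sum of the <cpu_usage> fractions across
# all running co-processor tasks reaches 1.0.
def cores_reserved(cpu_usage: float, gpu_tasks: int) -> int:
    """Whole cores set aside for `gpu_tasks` running GPU tasks."""
    return int(cpu_usage * gpu_tasks)  # floor of the summed fractions

# With <cpu_usage>0.02</cpu_usage>, 49 tasks still reserve nothing...
assert cores_reserved(0.02, 49) == 0
# ...and only the 50th task finally reserves a core.
assert cores_reserved(0.02, 50) == 1
# With <cpu_usage>0.5</cpu_usage>, two GPU tasks reserve one core.
assert cores_reserved(0.5, 2) == 1
```

This matches the figures in the post: 49 × 0.02 = 0.98 stays under the 1.0 threshold, while 2 × 0.5 hits it exactly.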
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)

[quote]A setting of <max_ncpus>0.02</max_ncpus> in an app_info.xml has nothing to do with the applications CPU usage while processing. It is used by BOINC for scheduling purposes.[/quote]
I was doing a copy/paste with <cpu_usage>0.02</cpu_usage> & forgot to change that to <cpu_usage>0.5</cpu_usage> for my example. Examples are much better when actually accurate! Thanks for catching that.
Brent Norman (Joined: 1 Dec 99, Posts: 2786, Credit: 685,657,289, RAC: 835)

Since we have the big boys in here talking about CPU usage settings :D I have always wondered... Is there any difference between:

GPU 0.5 / CPU 0.4
or
GPU 0.5 / CPU 0.3

Does the GPU task use more or less CPU as indicated, or is it just a limit for when to shut down a CPU core to feed the GPU?
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)

[quote]Since we have the big boys in here talking about CPU usage settings :D[/quote]
The full descriptions of the app_info.xml & app_config.xml settings are on the BOINC wiki. But the simple answer is that it is just to reserve CPU cores. So you can set the CPU value anywhere from GPU 0.5 / CPU 0.01 to GPU 0.5 / CPU 0.49 and nothing will change on a single-GPU system. If you had 2 GPUs and set GPU 0.5 / CPU 0.25, then 1 CPU core would get reserved for the 4 GPU instances.
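For reference, the two-GPU example above would look something like this in an app_config.xml. This is a sketch, not a tested config: the app name setiathome_v7 is only an illustration, and you should substitute the project's actual application name.

```xml
<app_config>
  <app>
    <name>setiathome_v7</name>  <!-- illustrative; use the project's real app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- 2 tasks per GPU; 2 GPUs -> 4 instances -->
      <cpu_usage>0.25</cpu_usage>  <!-- 4 instances x 0.25 = 1 core reserved -->
    </gpu_versions>
  </app>
</app_config>
```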
Brent Norman (Joined: 1 Dec 99, Posts: 2786, Credit: 685,657,289, RAC: 835)

Thanks, Hal, for confirming for me that GPU 0.5 with CPU 0.4 or CPU 0.3 are completely the same for actual CPU usage on a 1-GPU system ...
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.