Message boards : Number crunching : Question for anyone running BoincTasks
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
Looking at the CPU% column, I've noticed that there is now a red box around the normally blue % value. I've tried to find out what this means but am not getting anywhere. Can anyone tell me what it means when the red box fills the field?
Bernie Vine | Joined: 26 May 99 | Posts: 9954 | Credit: 103,452,613 | RAC: 328
"Looking at the CPU% and I've noticed that there is now a red box around the normally blue color of the % of the CPU. I've tried to look and see what this means but not getting anywhere." I assume you haven't changed the colours? If not, it usually means some sort of error. What is the % value in the box? I have had it occur when something else is using a high amount of CPU. Check that nothing unusual is running.
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
I'm over on Beta running some of their new OpenCL MB tasks, and I just started to notice the red box. Right now it's at 94.87%. The default is supposed to be 0.427 C + 1 NV. I sent Raistmer an email asking if that is normal, but I was wondering since the red box popped up. I've never seen it before, and I have not changed anything. This is a pure cruncher; no other activity. Thanks for the explanation. I'll see how it does.
Brent Norman | Joined: 1 Dec 99 | Posts: 2786 | Credit: 685,657,289 | RAC: 835
Ohh, I noticed that too a month ago with the newer AP apps. It's a warning for high or low CPU usage. Adding -unroll to the command line uses a LOT less CPU for a GPU task.
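For anyone wondering where -unroll actually goes: the optimised AstroPulse builds read extra options from a plain-text command-line file that sits next to the app executable in the BOINC project directory. The filename varies by build, and the unroll value below is purely illustrative; check your own install for the exact file your build reads:

```
-unroll 10
```

Higher unroll values generally mean more work per kernel launch and less CPU spent feeding the GPU, at the cost of GPU responsiveness.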
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
Thanks, Brent. I do that already for the APs, but over on Beta I'm running stock to see what they will do. First time for me seeing the red box, and it's across all work units. I'm monitoring them just to make sure. Thanks, Zalster
Brent Norman | Joined: 1 Dec 99 | Posts: 2786 | Credit: 685,657,289 | RAC: 835
I don't know why they are writing apps that require you to add -unroll to the command line. There is NO reason for the GPU to use 100% of a CPU core to do a task! On my 2-core machine, running 2 GPU tasks renders the machine useless for anything else unless you add -unroll. I think -unroll should be the default for apps, with -roll as an option if you want to increase GPU performance at the cost of CPU usage.
Mark Stevenson | Joined: 8 Sep 11 | Posts: 1736 | Credit: 174,899,165 | RAC: 91
"I don't know why they are writing apps that require you to add -unroll to the command line. There is NO reason for the GPU to use 100% of a CPU core to do a task!" I think the Lunatics people who write the optimized apps DON'T write them for one specific GPU card, but so they will work over a range of cards, from "old" cards to the latest and greatest GPU that's just been released. When you download a new driver it's specific to that card; it ain't one driver that runs every card. The SETI apps are the opposite of that. That's why there are different settings you can "tweak" for different cards if you feel the need, like unroll etc. I run 7 GTX 750s and ain't found any need to use anything like the unroll commands; all I do is edit the app_info file to run more than one task at a time. What I've got that set at is my business, and my settings work for me on my machines, but that don't mean my settings would work for you.
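Running more than one task per GPU, as Mark describes, usually means editing the <count> element inside <coproc> in app_info.xml. The app name and value below are placeholders, so treat this as a hedged sketch of the idea rather than a drop-in config; a fractional count below 1 means several tasks share one GPU:

```xml
<app_version>
    <app_name>astropulse_v7</app_name>
    <coproc>
        <type>NVIDIA</type>
        <!-- 0.5 GPUs per task = two tasks run concurrently on each GPU -->
        <count>0.5</count>
    </coproc>
</app_version>
```

Whether two-at-a-time helps depends on the card; as Mark says, settings that work on one machine won't necessarily work on another.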
Bob Giel | Joined: 11 Jan 04 | Posts: 76 | Credit: 5,419,128 | RAC: 0
"Looking at the CPU% and I've noticed that there is now a red box around the normally blue color of the % of the CPU. I've tried to look and see what this means but not getting anywhere." From the BoincTasks documentation on warnings: in the tasks view the CPU % column needs to be visible, and a warning colour appears when a task matches one of two rules. The first rule generates a warning if a task is assigned a full CPU but is running below 50% CPU. The second rule covers a typical GPU task, assigned a fraction of a CPU core plus 1 full GPU; if the CPU% is above 50% the rule is active. In that case the CPU is spending more than 50% of its time feeding the GPU, which may indicate an error.
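The two rules Bob quotes can be sketched in a few lines of Python. This is only an illustration of the logic as described, not BoincTasks' actual implementation; the function name, parameters, and the exact threshold comparisons are assumptions based on the quoted documentation:

```python
def cpu_warning(cpu_pct, cpus_assigned, gpus_assigned):
    """Return True if a task should get the red warning colour.

    cpu_pct       -- measured CPU usage of the task, 0-100
    cpus_assigned -- CPU cores the scheduler assigned (e.g. 1.0, or 0.427)
    gpus_assigned -- GPUs assigned (0 for a pure CPU task)
    """
    # Rule 1: a pure CPU task given a full core but using
    # less than half of it.
    if gpus_assigned == 0 and cpus_assigned >= 1.0 and cpu_pct < 50.0:
        return True
    # Rule 2: a GPU task given only a fraction of a core but
    # burning more than half a core feeding the GPU.
    if gpus_assigned >= 1.0 and cpus_assigned < 1.0 and cpu_pct > 50.0:
        return True
    return False
```

Zalster's case (94.87% CPU on a task assigned 0.427 C + 1 NV) trips the second rule, which is why the red box appeared.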
Brent Norman | Joined: 1 Dec 99 | Posts: 2786 | Credit: 685,657,289 | RAC: 835
Mark, I understand what you are saying... When you load Lunatics, you sign up for tweaking apps; that is the user's decision. But stock apps should run without problems UNLESS you want to tweak them.
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
OK, thanks to everyone who replied. I talked to Raistmer, and he said that for the Beta site it is normal for them to act this way. I had just never seen the red box before, which is why I asked. All is good. Zalster
Josef W. Segur | Joined: 30 Oct 99 | Posts: 4504 | Credit: 1,414,761 | RAC: 0
Some final comments. The MB7 OpenCL NVIDIA apps don't have the -unroll option, nor even the -use_sleep option, which is the one actually used to drastically reduce CPU usage in the AstroPulse OpenCL NVIDIA apps. There are other options which might have some effect on CPU usage. The cause of high CPU usage is basically the way NVIDIA has implemented OpenCL in all recent drivers. If the NVIDIA GPU is sufficiently better than the CPU driving it, the best productivity may simply be achieved by freeing a CPU core to support each GPU task. Windows NVIDIA MB7 OpenCL apps have never before been released at Beta, and there's much to learn; that's what Beta testing is for. They are not expected to make the transition to the main project, as the CUDA builds are faster for the angle ranges being sent to GPUs here. Joe
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
I kind of liked the OpenCL build for one reason only: I crunched 2 (and only 2; wish there had been more) VLARs, and both finished in 20 minutes, compared to 2 hours on the CPU. Aside from that, you are right about the CUDA builds being much faster. Thanks for the info, Joe. Zalster. Edit: the GPUs were running only 1 work unit each, and the 8-core wasn't crunching any at all, only supporting the GPUs.
jason_gee | Joined: 24 Nov 06 | Posts: 7489 | Credit: 91,093,184 | RAC: 0
"Some final comments." Yeah, there are some interesting background issues there, all intertwined with various exploratory paths. The NV driver's 'spin on a full core' threshold exists in CUDA too, to some degree; for OpenCL it kicks in at a kernel launch rate faster than ~30 launches per second. That stems from the underlying use of graphics/OS functionality and its synchronisation primitives. It becomes increasingly challenging to give these fast GPUs enough work to stay loaded: each batch of work should last longer than 1/30th of a second, but not so long that it starts to induce noticeable display lag (where a display is connected). In the specific 980 case, newer CUDA code ends up ~20-50% faster (depending on angle range) by allowing more work to spread and using latency-hiding techniques [still with low-CPU-demand blocking synchronisation]. Back with more experimental lines, using considerable CPU resources for difficult-to-parallelise portions was tried as an option (hybrid applications) and yielded a system-dependent doubling in throughput at the cost of CPU resources. It might be getting closer to a time when the option to use the CPU constructively (as opposed to just for synchronisation/feeding/reduction) can be brought back. Though the system-dependent nature of that kind of operation made it more or less a dead end in the past, a move to a more heterogeneous and configurable regime is likely, with the basics covered and the need for supporting tools (such as bench-based automation, user configuration, install-time and run-time optimisation) better understood than before. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
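Jason's batching constraint (keep each batch of kernels above 1/30 s so the driver doesn't spin a core, but short enough to avoid display lag) can be sketched numerically. This is just an illustration of the trade-off he describes, not anything from the actual apps; the 100 ms lag ceiling is an assumed comfort limit, while the 1/30 s figure comes from his post:

```python
MIN_BATCH_S = 1.0 / 30.0   # below ~30 launches/s the driver spin-waits a core
MAX_BATCH_S = 0.100        # assumed ceiling before a connected display lags

def kernels_per_batch(kernel_time_s):
    """How many kernels to enqueue per batch so the GPU stays busy
    without spinning the CPU or stalling the display."""
    # Enough kernels to cover at least 1/30 s of GPU work.
    n = max(1, int(MIN_BATCH_S / kernel_time_s) + 1)
    if n * kernel_time_s > MAX_BATCH_S:
        # Kernels too long to satisfy both constraints: cap at the
        # lag ceiling and accept some CPU spin instead.
        n = max(1, int(MAX_BATCH_S / kernel_time_s))
    return n
```

With 1 ms kernels this batches 34 launches (~34 ms of work); with 50 ms kernels both constraints collapse to one launch at a time, which is exactly the "hard to keep a fast GPU loaded" regime Jason describes.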
jason_gee | Joined: 24 Nov 06 | Posts: 7489 | Credit: 91,093,184 | RAC: 0
"Thanks jason, although I have to admit that most of your explanations are way, way above my knowledge of the inner workings of CUDA and OpenCL." Oh, it's a challenge for me too. Joe's ability to break it down to brass tacks is pretty special :) When eventually the simple view and the engineering bits underneath start to look similar, I think we'll be on the right path. Still some way to go with the engineering all around, I think :)
Zalster | Joined: 27 May 99 | Posts: 5517 | Credit: 528,817,460 | RAC: 242
Thanks, Jason. Yeah, a CUDA 50 VLAR takes 44-55 minutes depending on AR (3-5% of a core), while OpenCL VLARs take 18-22 minutes (99.5-99.96% of a core). But I don't mind, since a VLAR on the CPU is 2 hours, so a 20-minute crunch is a huge improvement. I have not tried multiple work units on the GPUs with these, since with the 8-core and 4 980s it's 52% total CPU utilization with 47% kernel use. The Intel chip is better: 16 cores (8 physical and 8 virtual) and 4 Titans, at 26% total CPU utilization and 24% kernel use. I remember what happened the last time kernel use went over 55% and CPU was near 90% (memory dump!!), so I'm not anxious to see that again. Besides, at a 20-minute average, it's moving along nicely. Zalster
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.