Message boards :
Number crunching :
Freeing CPU cores
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
Try running your nvidia card with a free core for a few WU cycles and see if it improves (decreases) your GPU WU completion times. That is an interesting experiment, I'll try it out, thanks! |
cov_route Send message Joined: 13 Sep 12 Posts: 342 Credit: 10,270,618 RAC: 0 |
Did you try the same for AP and failed, or just didn't try? I have tried tying AP to a specific core and it didn't work like it does for MB. With the new driver I should try it again to see if anything has changed. I also noticed it didn't work with Einstein; I had to free cores for that one too. |
Wiggo Send message Joined: 24 Jan 00 Posts: 34744 Credit: 261,360,520 RAC: 489 |
OK, I'll give a few details that I found out with my rigs with nvidia cards. My rigs only do MultiBeam on their video cards, but here goes. While my Q6600 was running 2x GTX550Ti's, each being fed 2 workunits, no cores needed to be reserved under SETI V6. SETI V7 may be another story, with the extra load it adds. Once I replaced the 550's with 2x GTX660Ti's being fed 3 workunits each, I had to reserve a core on the Q6600 to get the best productivity out of the rig. My 2500K now has the GTX550Ti's (replacing 3x 9800GT's, no freed core here either), but even under SETI V7 reserving a core on it reduces the overall productivity of the rig. Now I'm about to swap the Q6600 for a 3570K this weekend (under SETI, MB or AP, my 2500K is 4.2x more productive than the Q6600, which is overclocked to 3GHz). I'll run it for the 1st couple of weeks still with the freed core, but then I'll see what happens using all cores as well (I'll report back on how this works out somewhere around here). Now if I ever did decide to run APs on my cards it would only be 1 per card and a freed core for each (I may do this 1 day, but don't hold your breath waiting). Cheers. |
James Sotherden Send message Joined: 16 May 99 Posts: 10436 Credit: 110,373,059 RAC: 54 |
I'm waiting for my two i7 3770 rigs with a 550 Ti each to stabilize in RAC. I'm running Lunatics apps with OpenCL, no overclocking. Both are running 8 cores with 1 WU per GPU. When they flatline on RAC I will free a core up and see what happens. Old James |
Wiggo Send message Joined: 24 Jan 00 Posts: 34744 Credit: 261,360,520 RAC: 489 |
I'm waiting for my two i7 3770 rigs with a 550 Ti each to stabilize in RAC. I'm running Lunatics apps with OpenCL, no overclocking. Both are running 8 cores with 1 WU per GPU. When they flatline on RAC I will free a core up and see what happens. You'll go backwards doing that, I bet (1x GTX550Ti, even running 2 workunits, would not put enough strain on those CPUs to warrant a free core). Cheers. |
James Sotherden Send message Joined: 16 May 99 Posts: 10436 Credit: 110,373,059 RAC: 54 |
I'm waiting for my two i7 3770 rigs with a 550 Ti each to stabilize in RAC. I'm running Lunatics apps with OpenCL, no overclocking. Both are running 8 cores with 1 WU per GPU. When they flatline on RAC I will free a core up and see what happens. That's what I'd like to find out. I know everybody's mileage varies; I want to see for myself. I also suspect going to just 4 cores will drop RAC as well. Old James |
Wiggo Send message Joined: 24 Jan 00 Posts: 34744 Credit: 261,360,520 RAC: 489 |
I'm waiting for my two i7 3770 rigs with a 550 Ti each to stabilize in RAC. I'm running Lunatics apps with OpenCL, no overclocking. Both are running 8 cores with 1 WU per GPU. When they flatline on RAC I will free a core up and see what happens. Normally, Hyperthreading reportedly adds around a 30-40% performance increase over having it turned off, but I can't comment personally, as the last time I had a Hyperthreaded CPU was years back now, with a P4 2.4C (overclocked to 3.2GHz). Cheers. |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
Did you try the same for AP and failed, or just didn't try? Could you explain this a little more, please? I mean that the same switches can be used for AP to restrict the usable cores for each AP app instance. Saying that it doesn't work the same for AP, did you mean that the CPU core restriction is applied, but it doesn't help with performance? Or that the switches just don't work, i.e., don't restrict the usable CPU cores for the app instance? The Einstein app doesn't support these switches at all, so I didn't understand your message as a whole. SETI apps news We're not gonna fight them. We're gonna transcend them. |
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
Try running your nvidia card with a free core for a few WU cycles and see if it improves (decreases) your GPU WU completion times. Hm, I went to my computing options and set the maximum processor usage to 75% (I am on 4 cores). My GPU task did not change from showing 0.04 CPUs, even after shutting BOINC down and starting it back up. Am I missing something, perhaps? |
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Try running your nvidia card with a free core for a few WU cycles and see if it improves (decreases) your GPU WU completion times. That 0.04 CPUs is what BOINC uses for scheduling decisions: add enough of them up so the total is over 1, and BOINC will run one less CPU task. Freeing a core just means fewer resources are being used, so GPU usage improves (use GPU-Z or SIV to monitor); the task's CPU usage fraction won't change. Claggy |
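As an illustration of how those per-task CPU fractions add up, BOINC's app_config.xml lets you set them per application. This is only a hedged sketch: the application name setiathome_v7 is an assumption and must match the project's actual app name, and the values are examples. Here, two GPU tasks at cpu_usage 0.5 sum to 1, so BOINC schedules one less CPU task, effectively freeing a core:

```xml
<!-- Sketch: goes in the project directory as app_config.xml.
     <name> must match the project's app name (setiathome_v7 is assumed). -->
<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- two tasks share one GPU -->
      <cpu_usage>0.5</cpu_usage>  <!-- two tasks together budget 1 full core -->
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, "Read config files" in the BOINC Manager (or a client restart) applies it.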
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Claggy/Williams/Others: maybe if someone could compile all the information about free cores and core usage (AMD/Intel CPUs, ATI/NV GPUs) and put it in some sort of FAQ, it could avoid a lot of questions and give the community a guide to follow. Something like how to do it and what you need to know: how many cores you need to free if you have ATI/NV GPUs and/or AMD or Intel CPUs, etc. Just a suggestion... I know you are all busy people... and it's a big task. |
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
Thanks for your answer Claggy. Just when you think you have a handle on this stuff.... you find out there is a whole other area of crunching to consider. |
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
A question about GPU-Z, if anyone is familiar with it. On the sensors tab, is the "GPU Load" % the statistic I am interested in? And, maybe this is obvious, would an ideal % be as close to 100% as you can get? |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
I'll give you an example on some unknown host... GPU usage: 1 WU - 65%, 2 WU - 98%, 3 WU - 98%. The first number that puts you close to 100% is the best (more WUs in less time), in this case 2 WUs at a time; more WUs just make you lose time on overhead, even if your GPU's RAM capacity could run more WUs at a time. I never see a host at 100% (normally 97-99% is OK). |
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
Will give you an example on some unknown host... Thank you for the info! I was warned once, with my current computers, not to attempt more than 1 GPU WU at a time. Do you have an idea why this might be? GPU-Z tells me that my GPU % is hardly breaking 10% on my little laptop. EDIT: Sorry, I must have been looking at the wrong information. I am hovering at about 75% usage on the GPU. |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
I'll give you an example on some unknown host... First, multiple WUs at a time only work on Fermi and later GPUs. And my advice: unless you have some extra cooling device, don't crunch more than 1 WU at a time on a laptop. Normally laptops are not made to heavily use their GPUs, and crunching makes them run too hot. It will work, but surely not for long. Your laptop has an NVS 4200M; I'm not sure, but I believe it's a pre-Fermi GPU, so you can't run more than 1 WU at a time. It's hard to say why you get only 10% GPU usage; my bet is some power-saving feature normally used by laptop makers to save battery and/or avoid overheating. Look at the power settings in the Control Panel, which may give you a clue. Try to change from economy to high performance and see if it helps. But remember, don't push a laptop too hard: heat is their enemy, and GPU crunching produces a lot of heat. |
_ Send message Joined: 15 Nov 12 Posts: 299 Credit: 9,037,618 RAC: 0 |
I'll give you an example on some unknown host... That is good advice, thanks for the info. I edited my last post, but will say it again: I must have been looking at the wrong information. It seems that when I actually select my GPU in GPU-Z instead of the Intel Graphics Family choice, I am running pretty high in GPU %. For my desktop rig, I will have to do some investigation with GPU-Z, especially when I get my new AMD 7700 running, hopefully this weekend. I appreciate the dialog, thanks again! |
cov_route Send message Joined: 13 Sep 12 Posts: 342 Credit: 10,270,618 RAC: 0 |
Could you explain this a little more, please? I'm talking about the computing performance of the apps, not the functionality of the switches. With Einstein I used a script to tie it to a single core; I found it didn't allow full usage of the GPU, so I had to free a core. Same thing with Astropulse... BUT... since my last post I added the switches to the AP command line file to tie it to a single core, and now it works: I am getting full GPU usage without freeing a core. That may be just luck due to the nature of the work units, or it may be due to some change in the 13.8 driver. I will let it continue to run to see if it is reliable. I'm not running Einstein right now, so I don't know if the situation has changed with that app. I will try that when I get time. |
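The core-tying script mentioned above isn't shown in the thread, so here is a hedged sketch of the same idea, assuming Linux (os.sched_setaffinity is Linux-only; on Windows the equivalent is `start /affinity` or setting ProcessorAffinity in PowerShell). The helper name is an illustration, not the poster's actual script:

```python
import os

# Sketch (assumption, not the actual script from the thread): pin a process
# to one CPU core so a GPU-feeder app always runs on the same core.
# Linux-only; pid 0 means the calling process.
def pin_to_core(pid: int, core: int) -> set:
    os.sched_setaffinity(pid, {core})   # restrict the process to the given core
    return os.sched_getaffinity(pid)    # return the new affinity mask to confirm

# Example: pin this process to core 0, then check the mask.
# pin_to_core(0, 0)
```

Whether pinning helps or a core still has to be freed is exactly the app-dependent behaviour being discussed here.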
Tazz Send message Joined: 5 Oct 99 Posts: 137 Credit: 34,342,390 RAC: 0 |
<Thinking out loud here> I have TThrottle running to keep track of my temps and throttle them down if need be. I've read that TThrottle is more effective/efficient at, um, what's the word I'm looking for, CPU scheduling? Would it be the same thing to set my BOINC preferences to allow 100% and use TThrottle to set the max CPU % to 87.5 (for an eight-core CPU)? It would be running 8 tasks on 87.5% of 8 cores, or 8 tasks on 7 cores. I would guess it would cause an increase in crunching times for the CPUs, but it still would have a free 'core' for the GPU(s). Right? </Thinking out loud here> </Tazz> |
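The 87.5% figure above is just the fraction of cores left after setting one aside; a quick sanity check of the arithmetic (plain percentage maths, not a model of TThrottle's actual behaviour):

```python
# What to enter for "use at most X% of processors" so that one of N
# logical cores is left free for feeding the GPU.
def cpu_percent_with_free_core(total_cores: int, free_cores: int = 1) -> float:
    return (total_cores - free_cores) / total_cores * 100

print(cpu_percent_with_free_core(8))  # 87.5 (the eight-core case above)
print(cpu_percent_with_free_core(4))  # 75.0 (the four-core case earlier in the thread)
```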
cov_route Send message Joined: 13 Sep 12 Posts: 342 Credit: 10,270,618 RAC: 0 |
It would be running 8 tasks on 87.5% of 8 cores, or 8 tasks on 7 cores. I would guess it would cause an increase in crunching times for the CPUs, but it still would have a free 'core' for the GPU(s). Right? That might work; I don't know all about TThrottle. But you would end up running more than one task per core, which will give worse results than running fewer tasks, one per core. The reason is cache thrashing on your CPU. One job per core lets the core's dedicated cache fill with that one task's memory locations. If you have more than one task per core, the cache will be constantly switching between the working sets of the two tasks, which involves access to main memory, a much slower operation. The OS scheduler usually tries to keep the same jobs on the same cores across time slices for just this reason. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.