Should I leave cores or threads free for GPU?

TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965418 - Posted: 15 Nov 2018, 20:34:13 UTC - in response to Message 1965413.  
Last modified: 15 Nov 2018, 20:44:46 UTC

I assume you are asking about the GUI "use at most x% of the CPUs" option.
If so, then it's a very simple answer - this value only applies to CPUs dedicated to processing by a CPU application and does not take into account any CPU usage called for by a GPU. I know it can be a bit confusing, but BOINC has no control over what CPU usage a GPU demands.
Apparently some in this thread think the On Multiprocessor systems use at most ___ has some control over the GPUs usage of the CPU. It doesn't, the setting only controls the CPU Apps. The GPUs will use as much as they need no matter what the multiprocessor setting is set to. That's why my GPUs are using 50% CPU with the multiprocessor setting at 10%.

That's also why if you set the multiprocessor setting to 90%, the GPUs will use that remaining 10% if that's all that's available. It's been that way Long before any Max_concurrent settings were around.
ID: 1965418
Profile Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 30683
Credit: 53,134,872
RAC: 32
United States
Message 1965426 - Posted: 15 Nov 2018, 22:31:27 UTC - in response to Message 1965418.  

I assume you are asking about the GUI "use at most x% of the CPUs" option.
If so, then it's a very simple answer - this value only applies to CPUs dedicated to processing by a CPU application and does not take into account any CPU usage called for by a GPU. I know it can be a bit confusing, but BOINC has no control over what CPU usage a GPU demands.
Apparently some in this thread think the On Multiprocessor systems use at most ___ has some control over the GPUs usage of the CPU. It doesn't, the setting only controls the CPU Apps. The GPUs will use as much as they need no matter what the multiprocessor setting is set to. That's why my GPUs are using 50% CPU with the multiprocessor setting at 10%.

That's also why if you set the multiprocessor setting to 90%, the GPUs will use that remaining 10% if that's all that's available. It's been that way Long before any Max_concurrent settings were around.

Yes and no. BOINC still adds up the fractions of a CPU the GPU tasks guess they will need, and it should not start more tasks than the total available. I don't know if that has been checked, or if someone made a mistake and did the calculation (or part of it) in integer math rather than floating point. I remember tasks that used 0.5 CPU (or more) + 1 GPU counting as a full CPU.
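A toy sketch of the bookkeeping Gary describes (function names are mine, not BOINC's; this only models the arithmetic): summing fractional CPU reservations in floating point lets four 0.5-CPU GPU tasks reserve two cores, while rounding each fraction up to a whole CPU, the behavior he remembers, would reserve four.

```python
import math

def cpus_reserved_float(gpu_task_fractions):
    """Sum fractional CPU reservations in floating point."""
    return sum(gpu_task_fractions)

def cpus_reserved_ceil_each(gpu_task_fractions):
    """Hypothetical buggy variant: round each task's fraction
    up to a whole CPU before summing."""
    return sum(math.ceil(f) for f in gpu_task_fractions)

tasks = [0.5, 0.5, 0.5, 0.5]   # four GPU tasks, each claiming 0.5 of a CPU
print(cpus_reserved_float(tasks))      # 2.0 -> two cores reserved
print(cpus_reserved_ceil_each(tasks))  # 4   -> each 0.5 counted as a full CPU
```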
ID: 1965426
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965434 - Posted: 15 Nov 2018, 23:00:20 UTC - in response to Message 1965426.  

I assume you are asking about the GUI "use at most x% of the CPUs" option.
If so, then it's a very simple answer - this value only applies to CPUs dedicated to processing by a CPU application and does not take into account any CPU usage called for by a GPU. I know it can be a bit confusing, but BOINC has no control over what CPU usage a GPU demands.
Apparently some in this thread think the On Multiprocessor systems use at most ___ has some control over the GPUs usage of the CPU. It doesn't, the setting only controls the CPU Apps. The GPUs will use as much as they need no matter what the multiprocessor setting is set to. That's why my GPUs are using 50% CPU with the multiprocessor setting at 10%.

That's also why if you set the multiprocessor setting to 90%, the GPUs will use that remaining 10% if that's all that's available. It's been that way Long before any Max_concurrent settings were around.

Yes and no. BOINC still adds up the fractions of a CPU the GPU tasks guess they will need and should not start more tasks than the total available. I don't know if that has been checked or if someone didn't make a mistake and do the calculation (or part of it) in integer math not floating point. I remember tasks that used 0.5 CPU (or more) + 1 GPU as counting as a full CPU.

This is what I'm concerned with;
...The concept that reducing CPU usage to 90% frees up a core is wrong. The only thing you have done is reduced the number of threads that are processing the same amount of work. That "free" thread isn't going to be used by the GPUs. Because you specifically told BOINC that it isn't allowed to use the 10%. So the GPUs are going to be stealing cycles from the other threads. If you want the GPU to have a unused thread, it has to be able to use any non used thread...
This is not the way I've seen it work. The GPUs Will Use that 10% that you told the CPUs not to use. You told the CPUs not to use that 10%, Not the GPUs. Claiming you told BOINC not to use that 10% for the GPUs is Not Correct.

Which do you believe? It shouldn't take more than a couple minutes to test it, just as I've stated in my previous posts.
ID: 1965434
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965435 - Posted: 15 Nov 2018, 23:02:01 UTC - in response to Message 1965416.  

...Right now I see my cpu utilization running at 57%. I see 8 cores out of 16 idling around 10-25% while 8 cores are pegged at 100%. That is the way I want my hosts to run. Cpu tasks pegged to a physical core and running flat out and the others loafing along supporting my desktop browsing and feeding my gpus.
That is Not the way modern Systems are designed. The OS deliberately spreads the work across all available cores. That is the way the developers meant modern Systems to run. If you think the Developers are wrong, then continue on with what you are doing. Right now my system shows all cores running about the same with normal variance. I think I will let it run the way the Developers programmed it.

No, that is the best way to run AMD processors, which I run almost exclusively.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965435
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965442 - Posted: 15 Nov 2018, 23:26:22 UTC - in response to Message 1965435.  

...Right now I see my cpu utilization running at 57%. I see 8 cores out of 16 idling around 10-25% while 8 cores are pegged at 100%. That is the way I want my hosts to run. Cpu tasks pegged to a physical core and running flat out and the others loafing along supporting my desktop browsing and feeding my gpus.
That is Not the way modern Systems are designed. The OS deliberately spreads the work across all available cores. That is the way the developers meant modern Systems to run. If you think the Developers are wrong, then continue on with what you are doing. Right now my system shows all cores running about the same with normal variance. I think I will let it run the way the Developers programed it.

No that is the best way to run AMD processors which I run exclusively almost.

So, why don't the Systems have an AMD configuration? You know, "Click here and we will configure your system for your AMD processor", or something similar.
I've never seen it before, can you provide a link?
ID: 1965442
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965446 - Posted: 15 Nov 2018, 23:57:11 UTC - in response to Message 1965442.  

Can you really believe that AMD is NOT a long-lost thought to developers? They were ignored for 15 years as if AMD didn't exist. Developers developed EXCLUSIVELY for Intel hardware, as Intel had 90% of the market share.

Only last year did AMD even register with developers, when Ryzen came to market and started providing competition to Intel again. You are an obvious Intel fanboi. I am an AMD fanboi with twenty years of grudges against Intel for their monopolistic policies and flagrant disregard of court adjudications. I have absolutely no sympathy for Chipzilla.

So is it any wonder why programs and systems are not optimized for AMD? It is up to the AMD user to understand the product's differences from Intel architecture and make adjustments that are conducive to best AMD performance and optimizations.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965446
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965455 - Posted: 16 Nov 2018, 0:53:11 UTC - in response to Message 1965446.  

You are an obvious Intel fanboi....
Ha! I'm just a victim of circumstances. If Apple used AMD I would have an AMD Mac. If there were such a thing as an AMD mining board that worked, and it was much cheaper, I would have gone with that. Since there isn't any such thing as an AMD Hackintosh... well, you get the picture. Not much I can do about that. Also, once you have a certain type of CPU it's best to stay with that type; it makes swapping parts much easier. As to assigning certain cores to run at 100% while other cores are idle: again, if it were such a good idea the developers would have made it simple to do. There are probably downsides to it, the obvious one being that it creates hot spots in the CPU with uneven heat distribution. Probably as hard on the CPU as overclocking, I dunno.
ID: 1965455
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965467 - Posted: 16 Nov 2018, 1:33:43 UTC - in response to Message 1965455.  

I don't know if there can be 'hotspots'. The Ryzen 2700X is an 8 core cpu, so all cores of the silicon are in use all the time. So the entire die is mostly at the same temperature, and any core temp fluctuation happens only when a cpu task leaves before another replaces it. And AMD only reports one cpu temp for the package anyway, so I am just guessing at any difference in individual core temps. I just have it set up so that any cpu task has exclusive use of the FPU for crunching. The HT threads don't need any FPU resources since all they have to do is shovel work to the gpu.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965467
Profile Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 30683
Credit: 53,134,872
RAC: 32
United States
Message 1965468 - Posted: 16 Nov 2018, 1:36:14 UTC

Since O/S schedulers have come up: the differences between Doze, Mac, and Linux are night and day. On the Mac, when BOINC drops priority it actually drops global priority. Doze also drops it, but it doesn't have the granularity that the Mac's BSD does. For Linux, you have to know what the compile-time switches were set to when the kernel was made. If it is set one way, BOINC can't actually drop global priority, and that is the default for later kernels. Set another way, it can. If it is a crunch-only box this may not matter, but it makes a big difference on a mixed-use box.
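For the Unix side of this, the "priority drop" is just the process nice value. A tiny Python sketch (variable names are mine; on Linux kernels built with autogroup scheduling, as noted above, the nice value may only matter within a session's autogroup):

```python
import os

# A process can always lower its own priority (raise its nice value);
# restoring a higher priority requires privileges.
before = os.nice(0)    # passing 0 just reads the current nice value
after = os.nice(10)    # add 10, roughly what "run at low priority" means
print(before, after)
```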
ID: 1965468
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965502 - Posted: 16 Nov 2018, 5:30:39 UTC
Last modified: 16 Nov 2018, 6:00:00 UTC

It's an easy test. If you are running CPU tasks, then suspend them so the only tasks running are GPU tasks.
Open whatever it is you use to monitor CPU usage and note the CPU usage. Now change the Use at most setting to 100%.
Did the CPU usage change?
Now change the Use at most setting to 1%. Did the CPU usage change?
They don't change on My machines, which is positive proof that the setting has absolutely no effect on the GPUs' CPU usage.
If you want to know how much CPU usage your GPUs need, now would be a good time to observe it.
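That test can also be scripted through BOINC's documented global_prefs_override.xml mechanism (the helper below is mine; max_ncpus_pct is the "On multiprocessors, use at most X% of the processors" preference, and the file lives in the BOINC data directory, whose path varies by OS):

```python
def prefs_override(ncpus_pct):
    """Build a minimal global_prefs_override.xml body setting only
    the 'use at most X% of the processors' preference."""
    return (
        "<global_preferences>\n"
        f"   <max_ncpus_pct>{ncpus_pct:.6f}</max_ncpus_pct>\n"
        "</global_preferences>\n"
    )

for pct in (100, 1):
    print(prefs_override(pct))
    # After writing this into the BOINC data directory as
    # global_prefs_override.xml, apply it with:
    #   boinccmd --read_global_prefs_override
    # then watch the GPU tasks' CPU usage; per the test above,
    # it should not change.
```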

On another note, it appears there are a few people trying to pound the square peg into the round hole.
Yes, some people are trying to run the INTEL MAC OS on their AMD CPU. Of course the question would be Why?
Everything from the OS to the Apps was designed to run on Intels, hence the nomenclature 'INTEL Mac'.
So... This is the definition of a fanboi. It's the only reason I can think of why someone would try such a thing. Any normal person would use what the Platform was designed for.

Anyone know of an AMD based Mining board? I haven't seen any.
ID: 1965502
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965505 - Posted: 16 Nov 2018, 5:48:53 UTC - in response to Message 1965467.  

I don't know if there can be 'hotspots'. The Ryzen 2700X is an 8 core cpu. So all cores of the silicon are in use all the time. So the entire die is at the same temperature mostly and any core temp fluctuation is only when a cpu task leaves before another replaces it. And AMD only reports just one cpu temp for the package any way so I am just guessing at any difference in individual core temps. I just have it set up so that any cpu task has exclusive use of the FPU register for crunching. The HT threads don't need any FPU resources since all they have to do is shovel work to the gpu.

That doesn't sound quite right to me. If you max out the CPU, the temperature rises, doesn't it? On Intels the temperature is given for each core, and you can see it rise and fall depending on its use. I think when I used to play around with locking a cpu task to a specific core, the core in use would actually jump around to keep the temperature down. Maybe we should contact AMD and ask them; otherwise it's just speculation. What's not speculation is that you are changing the default way the system was designed.
ID: 1965505
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22223
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1965510 - Posted: 16 Nov 2018, 6:07:40 UTC

Is it possible that AMD use a single core temp sensor, not the eight that Intel use?

It is almost impossible to design a multi-core chip with all the supporting functions that a Ryzen family chip has and not get hotspots, although looking at micrographs of the chips they appear to have made a pretty good job of it at some (low) core counts.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1965510
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965519 - Posted: 16 Nov 2018, 6:43:15 UTC - in response to Message 1965502.  

I don't know what tangent you are running off on, TBar. I don't run Macs. I only run PCs on x86 processors. I have only ever run my Seti preferences at 100% CPU, 100% of the time. Always have. No reason not to. To control the number of tasks I run I use project_max_concurrent.
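For anyone following along, project_max_concurrent goes in an app_config.xml in the project's directory under the BOINC data directory; a minimal sketch (the limit of 8 is just an example value):

```xml
<app_config>
   <project_max_concurrent>8</project_max_concurrent>
</app_config>
```

BOINC re-reads this on client restart or via the manager's "Read config files" option.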

Instead of talking absolute nonsense about AMD architecture, why don't you read up on it. AMD cpus have ALWAYS had a SINGLE cpu die temperature sensor. For 2 cores or 32 cores. One sensor.

I have no interest in running a mining board as they only run Intel cpus. Intel cpu motherboards can have PLX chips to support multiple PCIe lanes. The one drawback and limitation in desktop AMD AM4 designs is the absence of any HEDT design with PLX chips. That is my biggest wish for AMD. But probably never going to happen since they offer a HEDT architecture with the Threadripper products.

My next build is a TR host so I can populate it with four gpus instead of the three card limit with the AM4 products.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965519
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965520 - Posted: 16 Nov 2018, 6:47:11 UTC - in response to Message 1965510.  

Is it possible that AMD use a single core temp sensor, not the eight that Intel use?

It is almost impossible to design a multi core chip, with all the supporting functions than a Ryzen family has and not get hotspots, although looking at micrographs of the chips they appear to have made a pretty good job of it at some (low) core counts.

YES, ALL AMD CPUS only have a SINGLE temp sensor to export temperatures to the outside world. Ryzens have over 20 internal temp sensors scattered about the die for feedback to the internal XFR and PBO2 overclocking algorithms in the cpu. Those sensors do not export externally and are unavailable to the user.
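For what it's worth, on Linux that single exported value is what the k10temp hwmon driver for AMD CPUs publishes. A small sketch that looks for it (function names are mine; standard hwmon sysfs layout; returns None on a non-AMD box):

```python
import glob
import os

def millideg_to_c(raw):
    """hwmon exposes temperatures in millidegrees Celsius."""
    return raw / 1000.0

def read_k10temp():
    """Return the single package temperature exported by the Linux
    k10temp driver for AMD CPUs, or None if no such sensor exists."""
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            with open(os.path.join(hwmon, "name")) as f:
                name = f.read().strip()
            if name == "k10temp":
                with open(os.path.join(hwmon, "temp1_input")) as f:
                    return millideg_to_c(int(f.read()))
        except OSError:
            continue
    return None

print(read_k10temp())
```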
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965520
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965521 - Posted: 16 Nov 2018, 6:56:43 UTC - in response to Message 1965519.  

I don't know what tangent you are running off on TBar. I don't run MACS. I only run PC's on X86 processors. I only have run my Seti preferences at 100% CPU at 100% of the time. Always have. No reason not to. To control the number of tasks I run I use the project_max_concurrent.

Instead of talking absolute nonsense about AMD architecture, why don't you read up on it. AMD cpus have ALWAYS had a SINGLE cpu die temperature sensor. For 2 cores or 32 cores. One sensor.

I have no interest in running a mining board as they only run Intel cpus. Intel cpu motherboards can have PLX chips to support multiple PCIe lanes. The one drawback and limitation in desktop AMD AM4 designs is the absence of any HEDT design with PLX chips. That is my biggest wish for AMD. But probably never going to happen since they offer a HEDT architecture with the Threadripper products.

My next build is a TR host so I can populate it with four gpus instead of the three card limit with the AM4 products.

Spoken as a True Fanboi.
I ran Windows for years here before I was able to build the Mac apps. I've also built a few Linux apps. So you see, I've run them all.
I believe it was the Windows x86 PCs that showed the loaded cores jumping from one to the other. No, I haven't run any AMDs, but if they only have a single temperature sensor then I'll stay away.
You mad bro?
ID: 1965521
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965524 - Posted: 16 Nov 2018, 7:12:54 UTC - in response to Message 1965521.  

No, I haven't run any AMDs

Instead of speculating about what you think you know about AMD processors, please just educate yourself and read about them in the reviews of their architecture. Anandtech always does a great job explaining how AMD cpus work and how they are different designs compared to Intel.
https://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965524
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1965533 - Posted: 16 Nov 2018, 8:39:17 UTC - in response to Message 1965524.  

No, I haven't run any AMDs

Instead of speculating about what you think you know about AMD processors, please just educate yourself and read about them in the reviews of their architecture. Anandtech always does a great job explaining how AMD cpus work and how they are different designs compared to Intel.
https://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700

I'm kinda busy testing a new cuda app right now, and I really don't see any need. I don't have any AMDs, and I don't plan on buying any anytime soon. I just bought a mining machine, and only Intels were available. I needed a new Hackintosh, and since it needs to run Intel code I got another Intel-based board. I don't see any need for another PC for quite a while. The mining machine is nice as it runs what would normally take 3 PCs to run; just what a person trying to cut down needs.
ID: 1965533
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1965587 - Posted: 16 Nov 2018, 16:19:48 UTC - in response to Message 1965519.  
Last modified: 16 Nov 2018, 16:21:15 UTC

The one drawback and limitation in desktop AMD AM4 designs is the absence of any HEDT design with PLX chips. That is my biggest wish for AMD. But probably never going to happen since they offer a HEDT architecture with the Threadripper products.

My next build is a TR host so I can populate it with four gpus instead of the three card limit with the AM4 products.


i thought AM4 was by definition NOT HEDT? so i don't think you'll ever have HEDT on the AM4 socket. that's what TR is for.

but speaking of PLX chips and added PCIe lanes. motherboard chipsets are "kind of" a PLX chip in that sense. they do add PCIe lanes, even on AM4 and TR4 platforms.

looks like the X370 and X399 will give you 8, and a B350 gives you 6, limited to gen2 speeds though. these are probably running things like M.2 or x1 slots on these boards (just my guess, i haven't actually checked).
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1965587
Profile Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1965601 - Posted: 16 Nov 2018, 16:59:02 UTC - in response to Message 1965587.  

Yes, I understand that AM4 is not an HEDT design. It was supposed to be the affordable re-entry into the PC marketplace against the Intel marketing juggernaut. But you shouldn't have to go to the complexity of Threadripper, with quad-channel memory and a huge socket. Why can't they just add a PLX chip or redesign the AM4 chipset to support more PCIe lanes, so there can be more PCIe slots on the existing AM4 socket motherboards?
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1965601
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1965607 - Posted: 16 Nov 2018, 17:07:58 UTC

yeah i hear ya.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1965607


 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.