Intel GPU and CPU at once

Message boards : Number crunching : Intel GPU and CPU at once

Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937069 - Posted: 25 May 2018, 11:09:45 UTC

Just noticed something - if you're using an Intel GPU (built into the CPU), it runs much slower (about a fifth of the speed) if the CPU cores are fully utilised. So if you're running GPU and CPU projects on it, you need to reduce the number of CPU cores you give to BOINC.
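One way to do that is the "Use at most N% of the CPUs" computing preference. The same setting can also be dropped into a global_prefs_override.xml file in the BOINC data directory; a sketch (75% shown here, i.e. 3 of 4 cores - the value is just an example, pick what suits your machine):

```xml
<global_preferences>
   <!-- leave one of four cores free for the iGPU app -->
   <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
```

BOINC re-reads this file when you choose "Read local prefs file" in the manager, and it overrides the website preferences.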
ID: 1937069 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937071 - Posted: 25 May 2018, 11:20:53 UTC - in response to Message 1937069.  

Following on from this, is there any point? Does an Intel GPU do more work than an Intel CPU core?
ID: 1937071 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937073 - Posted: 25 May 2018, 11:30:08 UTC - in response to Message 1937069.  

Answered my own question by running Einstein's LATeah on both. The GPU core is 4 times faster than 1 CPU core on an i5-3570K. Probably even more of a difference on the 8th-generation chips, as their graphics are twice as fast, but a core is only 1.5 times faster.
ID: 1937073 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937074 - Posted: 25 May 2018, 11:35:08 UTC - in response to Message 1937073.  
Last modified: 25 May 2018, 11:35:28 UTC

Answered my own question by running Einstein's LATeah on both. The GPU core is 4 times faster than 1 CPU core on an i5-3570K. Probably even more of a difference on the 8th generation chips, as their graphics is twice as fast, but a core is only 1.5 times faster.

It depends on the app, the data, and the CPU model.

On new Atom netbooks the iGPU part is much faster; desktop processors give quite different results.
Benchmarking on your particular model is required - there is no single recipe here.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937074 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937075 - Posted: 25 May 2018, 11:47:20 UTC - in response to Message 1937074.  

I've only tried the Einstein project. Have you compared SETI?
ID: 1937075 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937076 - Posted: 25 May 2018, 11:50:57 UTC - in response to Message 1937075.  
Last modified: 25 May 2018, 12:00:06 UTC

Copying my answer from the BOINC message boards.

Einstein is a particularly curious case. Their intel_gpu app does, as you say, slow down dramatically if all CPU cores are loaded with BOINC applications. But you can bypass that by using a program like Process Lasso to peg the iGPU app - NOT any of the others - to real-time process priority. I find there's a tiny stutter every 11 minutes or so, as one task finishes and the next starts up, but no other detrimental effect on the usability of the machine.
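For anyone who wants to script the same trick without Process Lasso, here is a rough standard-library sketch. The process name is hypothetical (substitute the actual Einstein intel_gpu binary name), the Windows branch uses wmic's setpriority verb, and the POSIX branch is only a loose analogue (minimum niceness, which needs root):

```python
import os
import subprocess
import sys

# Hypothetical binary name - replace with the real Einstein intel_gpu app.
IGPU_APP = "einstein_intel_gpu.exe"

def boost(pid):
    """Give ONE process top scheduling priority, as Process Lasso does.
    Apply only to the iGPU task, never to every BOINC app."""
    if sys.platform == "win32":
        # wmic ships with Windows; "realtime" is the REALTIME priority class.
        subprocess.run(
            ["wmic", "process", "where", f"ProcessId={pid}",
             "CALL", "setpriority", "realtime"],
            check=True)
    else:
        # Closest POSIX analogue: niceness -20 (requires privileges).
        os.setpriority(os.PRIO_PROCESS, pid, -20)
```

As Richard says below the quote, treat real-time priority with care: a runaway real-time process can starve the rest of the system.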

However, you may find that the CPU apps - especially, heavily optimised floating point apps like SETI and perhaps Einstein (untested) - run slower in that configuration. Lighter weight, and primarily integer, CPU apps suffer less.

Real-time process priority is NOT recommended for general use, but in this case it helps. Explore with care.
ID: 1937076 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937077 - Posted: 25 May 2018, 11:58:36 UTC - in response to Message 1937076.  
Last modified: 25 May 2018, 12:00:31 UTC

I shall leave it alone, as I run quite a few projects, and the chips can change tasks quite often. I also have some proper graphics cards running as well on half the machines, and some of those tasks need a lot of CPU assistance. So for me I think the best thing is to make sure the CPU usage stays away from 100% by taking a core or two off in the BOINC settings, so no GPU is ever slowed down.

Didn't get an email notification of your post on the BOINC message board, by the way. But I did for this one.
ID: 1937077 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1937139 - Posted: 25 May 2018, 22:15:36 UTC - in response to Message 1937071.  

Following on from this, is there any point? Does an Intel GPU do more work than an Intel CPU core?

It would depend on the project.
For Seti, you're better off not using the iGPU as it reduces the output of the CPU cores significantly in most systems (some of the 2 core, low clock speed systems will get more work with the iGPU, but the majority of CPUs these days are more than 2 cores & have clock speeds over 2.4GHz). The reduction in CPU output is so great that by not using the iGPU, your CPU will produce more work than is lost by not running work on the iGPU. The shared power limits, thermal limits & caches make using the iGPU for Seti not worth it.
Grant
Darwin NT
ID: 1937139 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937149 - Posted: 25 May 2018, 22:48:01 UTC - in response to Message 1937139.  
Last modified: 25 May 2018, 22:49:29 UTC

That's not what I've observed with a 4-core i5-3570K and a 6-core i5-8600K, using Asteroids on the CPU cores and Einstein on the internal GPU core. If I run all CPU cores and the GPU core at once, the GPU core slows to 20% speed, and the CPU cores still run at full speed. If I free up one CPU core, everything runs at full speed. And the GPU core is doing more work than the disabled CPU core was. I would have thought the same was true on SETI for all cores. Just disable one of the CPU cores. You should get full power on what's left, and GPU is always better than CPU. I set BOINC preferences to "always use 75% of CPU cores" (on a 4-core CPU).
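That trade-off can be sanity-checked with quick arithmetic, using the rough speed figures quoted in this thread (illustrative numbers for the i5-3570K case, not measurements):

```python
# Relative speeds from the thread: the iGPU is ~4x one CPU core when it has
# a free core to feed it, but runs at ~20% of that when all cores are busy.
CORES = 4
CPU_CORE = 1.0
IGPU_FREE = 4.0            # iGPU speed with one CPU core left free
IGPU_STARVED = 4.0 * 0.2   # iGPU speed with all CPU cores loaded

all_cores_busy = CORES * CPU_CORE + IGPU_STARVED          # 4 + 0.8
one_core_freed = (CORES - 1) * CPU_CORE + IGPU_FREE       # 3 + 4

print(f"all cores busy: {all_cores_busy:.1f} work units/time")
print(f"one core freed: {one_core_freed:.1f} work units/time")
```

With these numbers, freeing a core wins comfortably (7.0 vs 4.8 relative units), which matches Peter's observation; Grant's point is that on Seti the CPU-side penalty is large enough to flip the result.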
ID: 1937149 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1937151 - Posted: 25 May 2018, 22:51:57 UTC - in response to Message 1937149.  
Last modified: 25 May 2018, 22:52:29 UTC

That's not what I've observed with a 4 core i5-3570K and a 6 core i5-8600K, using Asteroids on the CPU cores and Einstein on the internal GPU core. If I run all CPU cores and the GPU core at once, the GPU core slows to 20% speed, and the CPU cores still run at full speed. If I free up one CPU core, everything runs full speed. And the GPU core is doing more work than the disabled CPU core was. I would have thought the same was true on SETI for all cores. Just disable one of the CPU cores. You should get full power on what's left, and GPU is always better than CPU. I set boinc preferences to "always use 75% of CPU cores" (on a 4 core CPU).

I did say it would depend on the project.
What I posted was for Seti, which I'm sure I also mentioned.
And running multiple projects at the same time on different hardware will result in even more different performance results.
Grant
Darwin NT
ID: 1937151 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937156 - Posted: 25 May 2018, 23:14:21 UTC - in response to Message 1937151.  

Strange, as I would have thought all projects would run equally better on GPU than CPU, especially if they're both SP FP, which Einstein and SETI are. I guess we just all experiment and find what works best on our rigs with our projects.
ID: 1937156 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1937160 - Posted: 25 May 2018, 23:29:58 UTC - in response to Message 1937156.  
Last modified: 25 May 2018, 23:30:57 UTC

Strange, as I would have thought all projects would run equally better on GPU than CPU, especially if they're both SP FP, which Einstein and SETI are.

It all depends on how well optimised the applications are. With an extremely well optimised GPU application and a poor CPU application, the iGPU will give more work. With a highly optimised CPU application and a poorly optimised GPU application, it's better not to use the iGPU. With both CPU & GPU applications well (or poorly) optimised, it's a crap shoot which is best. And the type of work done (here at Seti there are many different types of WUs as far as processing is concerned) has a big impact on which application gives the best results.
An iGPU shares the same thermal limits & caches as the CPU. The more optimised the applications, the sooner those limits will be reached. And the mix of CPU/iGPU usage that gives the best output within those limits will vary, hugely.
For Seti, with present multicore, high clock speed hardware, using the iGPU will result in a significant drop in work done per hour. The more cores, the faster the clocks, the bigger the hit will be if you use your iGPU.

I guess we just all experiment and find what works best on our rigs with our projects.

Yep.
Grant
Darwin NT
ID: 1937160 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937274 - Posted: 26 May 2018, 15:07:11 UTC - in response to Message 1937160.  

I've checked into it some more, analysing my 5 computers with different CPUs. It seems some built-in graphics are more powerful compared to their CPU cores, and some aren't. Some slow down more than others when the CPU cores are fully used. And some use a CPU core at the same time for some reason (different instruction sets for newer GPU and CPU cores causing the WU to run on both at once? For example, Asteroids on my i5-3570K with Einstein on its GPU uses no CPU time for the GPU task. Do the same on my i5-8600K, and the GPU task uses a whole CPU core at the same time.) So it depends a lot on the chip. I've tried different projects on each without much difference; the main thing appears to be the design of the CPU. So I've set them all up differently to get the most work out of them :-)
ID: 1937274 · Report as offensive
crollack

Send message
Joined: 19 Aug 07
Posts: 1
Credit: 35,630,708
RAC: 50
United States
Message 1937709 - Posted: 30 May 2018, 21:16:01 UTC

I have an i7-3770 @ 3.4GHz, 8GB RAM, and a GeForce GTX 1060 w/3GB RAM running Linux Mint 18.3. How can I tell if all the hardware is being utilized at maximum capability? If it is not, how can I enable it? Thanks for your help!
ID: 1937709 · Report as offensive
Peter Hucker
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 39
Credit: 7,555,481
RAC: 0
United Kingdom
Message 1937710 - Posted: 30 May 2018, 21:27:24 UTC - in response to Message 1937709.  

I have an i7-3770 @ 3.4GHz, 8GB RAM, and a GeForce GTX 1060 w/3GB RAM running Linux Mint 18.3. How can I tell if all the hardware is being utilized at maximum capability? If it is not, how can I enable it? Thanks for your help!


The most powerful chip in your setup is the GeForce. So you should make sure that's running flat out first. Stop the CPU tasks and see if the graphics card speeds up. If it does then you're doing too much CPU work.

I like to make the CPU run at about 80% (according to Windows Task Manager, I assume Linux has an equivalent). Then I'm sure it can always support the GPU work units without the graphics card sitting idle.
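On Linux, a rough equivalent of the Task Manager CPU graph can be computed from /proc/stat. A small standard-library sketch (Linux-only; field meanings per the proc(5) man page, the 0.5s sampling interval is arbitrary):

```python
import time

def cpu_busy_percent(interval=0.5):
    """Overall CPU utilisation over `interval` seconds, read from
    /proc/stat - roughly what Task Manager's CPU graph shows."""
    def snapshot():
        with open("/proc/stat") as f:
            # First line: "cpu user nice system idle iowait irq softirq ..."
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]   # idle + iowait count as "not busy"
        return idle, sum(fields)

    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    dt = total2 - total1
    return 100.0 * (1.0 - (idle2 - idle1) / dt) if dt else 0.0
```

Command-line tools like `top` or `htop` show the same figure interactively.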

Some people run more than one task on the GPU, and you can sometimes squeeze extra power out of it, but I've found you only get a little bit, and it seems to tail off after a while, so I think it might be making the GPU tired? So I've gone back to just running one task per GPU.
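Running more than one task per GPU is normally done with an app_config.xml in the project's directory under the BOINC data folder. A sketch for two concurrent tasks (the app name `setiathome_v8` is SETI@home's v8 app; other projects use different names, and the cpu_usage figure is just an example budget):

```xml
<app_config>
   <app>
      <name>setiathome_v8</name>
      <gpu_versions>
         <!-- 0.5 GPU per task => two tasks share one GPU -->
         <gpu_usage>0.5</gpu_usage>
         <cpu_usage>0.25</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

BOINC picks it up on "Read config files" or a client restart.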
ID: 1937710 · Report as offensive
Profile iwazaru
Volunteer tester
Avatar

Send message
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1937718 - Posted: 30 May 2018, 21:49:37 UTC
Last modified: 30 May 2018, 21:49:59 UTC

I've been playing around too these past couple of months and have found HWMonitor immensely helpful.
It's the only program I know of where you can see CPU, iGPU and Nvidia clocks, temps & loads, all on the same page (instead of in tabs).

https://www.cpuid.com/softwares/hwmonitor.html

(Same guys make CPU-Z)
ID: 1937718 · Report as offensive
