Reports on experiments with a Dell Optiplex 7010 Microtower

rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1879293 - Posted: 20 Jul 2017, 18:47:55 UTC

I then brought the GTX908

Should of course read:
I then brought the GTX980

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1879293
Profile betreger Project Donor
Joined: 29 Jun 99
Posts: 11360
Credit: 29,581,041
RAC: 66
United States
Message 1879308 - Posted: 20 Jul 2017, 19:52:42 UTC - in response to Message 1879224.  
Last modified: 20 Jul 2017, 20:19:18 UTC

When I first set up my system with three GTX 1080s it had an i7. I ran it for a short time CPU-only, then brought the iGPU into play. Two things happened: the CPU got very hot, and the performance dropped very dramatically (CPU task times were more than doubled), even when I released a core for the iGPU to play with. The same happened with or without hyper-threading.
I then brought the GTX908 into play and went through the various CPU/GPU/iGPU combinations; all those with the iGPU in play were worse, in terms of both CPU temperature and run times, than with it out of play (even the times on the GTX980 were badly affected). So I stopped using the iGPU for anything other than driving a monitor, and eventually gave up on that too.

Besides the poor output, in my experience a high percentage of my inconclusive results were ones where the wingman was an iGPU, so they put a small but extra load on our poor servers.
ID: 1879308
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879366 - Posted: 21 Jul 2017, 4:34:20 UTC - in response to Message 1879308.  

Besides the poor output, in my experience a high percentage of my inconclusive results were ones where the wingman was an iGPU, so they put a small but extra load on our poor servers.


I just set things so I won't be getting any new Intel gpu tasks. It may take a while to run out of them. Then the other cpu graphs should start climbing again.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879366
Kiska
Volunteer tester
Joined: 31 Mar 12
Posts: 302
Credit: 3,067,762
RAC: 0
Australia
Message 1879428 - Posted: 21 Jul 2017, 14:27:07 UTC

What you will notice when the iGPU is processing tasks, in an app called ThrottleStop, is that some cores may be downclocking to keep within the TDP.
I noticed this when I was processing tasks with my i5-4210U, 2 tasks on the CPU + 1 iGPU task = Throttle hell
ID: 1879428
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1879521 - Posted: 21 Jul 2017, 22:34:20 UTC - in response to Message 1879428.  

What you will notice when the iGPU is processing tasks, in an app called ThrottleStop, is that some cores may be downclocking to keep within the TDP.
I noticed this when I was processing tasks with my i5-4210U, 2 tasks on the CPU + 1 iGPU task = Throttle hell

Interesting. I could see your i5-4210U hitting its 15W TDP limit and throttling fairly easily.
When I was testing with my i5-4670K I didn't observe any CPU or GPU throttling.

Something else that is interesting: my J1900 can run iGPU tasks without affecting the CPU run times. It is configured a little differently than a standard desktop CPU.
Where standard desktop CPUs have an L3 cache shared by all CPU cores and the GPU, the J1900 has a 1MB cache for each pair of cores, which the iGPU also uses.
I have a theory that iGPUs with a dedicated cache may not have the same effect on CPU run times. However, I have not yet been able to obtain a CPU with such an iGPU for a reasonable amount to confirm my theory.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1879521
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879770 - Posted: 23 Jul 2017, 0:01:49 UTC

Now that is "interesting". For the 2nd time my 7010 has "grown a core", at least as far as BOINC is concerned. I have a 4c/8t CPU (i7-3770) that I have just been experimenting with, running the Intel GPU in parallel with the GTX 750 Ti I have on there.

I then shut down all the Intel GPU tasks. But both Citizen Science Grid and Rosetta have been cheerfully running 8 tasks, and the GPU has been running the 9th one. My app_config.xml file in the Seti directory has not mysteriously changed; it is still set for 1 CPU and 1 GPU.

Apparently Seti doesn't do that: it only runs 7 CPU tasks plus 1 CPU devoted to the GPU.

I have posted a message about this "extra core" on the Citizen Science Grid message area/website.

My speculation is that, because I have not specifically disabled the Intel GPU (in fact it is currently still driving one of my monitors), something odd is going on around that.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879770
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1879791 - Posted: 23 Jul 2017, 0:30:32 UTC - in response to Message 1879770.  

it is still set for 1 cpu and 1 gpu.

What do you mean by that?

<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
   <gpu_usage>1.00</gpu_usage>
   <cpu_usage>1.00</cpu_usage>
  </gpu_versions>
 </app>
</app_config>

If those are your settings, it's doing exactly as you've told it to do.
Save 1 CPU core for each GPU WU that is running.
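For contrast, a sketch of the opposite choice in the same app_config.xml format: a cpu_usage below 1.0 budgets only a fraction of a core per GPU task, so BOINC will keep all CPU cores loaded with CPU work. The 0.25 here is purely illustrative, not a recommendation:

```xml
<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
   <!-- one task per GPU, but only a quarter of a core budgeted for it -->
   <gpu_usage>1.00</gpu_usage>
   <cpu_usage>0.25</cpu_usage>
  </gpu_versions>
 </app>
</app_config>
```

Whether that helps or hurts depends on the app; for SoG, which leans hard on its support core, reserving a full core is usually the safer choice.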
Grant
Darwin NT
ID: 1879791
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879822 - Posted: 23 Jul 2017, 2:20:43 UTC - in response to Message 1879791.  


<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
   <gpu_usage>1.00</gpu_usage>
   <cpu_usage>1.00</cpu_usage>
  </gpu_versions>
 </app>
</app_config>

If those are your settings, it's doing exactly as you've told it to do.
Save 1 CPU core for each GPU WU that is running.


But I have 7 CPU tasks running and a GPU task running using 1 core: 7+1=8 (normal). Under CSG/Rosetta I have 8 CPU tasks running and a GPU task running using 1 core: 8+1=9 tasks. So I have "picked up a core" as far as BOINC is concerned. I did look at Task Manager and it still says 8 CPUs.
It only happens with those two projects. And I THINK it only happened once I had the Intel GPU activated. I will be disabling the Intel GPU as part of my GPU power-draw testing ("how much do those GPUs draw?") in the near future. I expect, but can't prove, that this odd behavior will go away.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879822
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1879825 - Posted: 23 Jul 2017, 2:40:14 UTC - in response to Message 1879822.  

8+1=9 tasks. So I have "picked up a core" as far as Boinc is concerned.

No, what is happening is normal.
You have 8 CPU cores, and 1 GPU.
It is also what would have happened with Seti if you hadn't reserved a CPU core for GPU processing.

BOINC is doing exactly what you have told it to do. For Seti it reserves a CPU core for GPU work, for the other projects it doesn't. It's working exactly as you've told it to.
Grant
Darwin NT
ID: 1879825
dallasdawg
Joined: 19 Aug 99
Posts: 49
Credit: 142,692,438
RAC: 2
United States
Message 1879847 - Posted: 23 Jul 2017, 3:54:38 UTC - in response to Message 1878741.  

One of the reasons I wanted to experiment with a 7010 was that it was a "low cost" turnkey system (no OS, but I have a large supply of Win7 product codes, so who cares?) which supports the AVX enhancements. Basically this means it runs CPU tasks a bunch faster than my other SSE4.2 machines. (Go Lunatics Beta6!)
https://onedrive.live.com/?authkey=%21AFg76LeCY4GbPRk&id=8D83BF774A4A86F5%21979&cid=8D83BF774A4A86F5

So the basic setup was an i7-3770 (3.4GHz + turbo, 4c/8t) with 8GB of RAM, a 500GB HD, and a DVD drive. I had a spare GTX 750 Ti low profile that fit, so I put that in, installed Win7, upgraded to SP1, and installed stock BOINC/Seti.

After getting my usual GTX 750 Ti baseline of 15-20 minutes per SOG task and a brisk ~2.5 hours per Seti CPU task, I upgraded to Lunatics Beta6 and installed the r3584 GPU app, which has some additional benefits.

That got the CPU tasks down into the ~1.7 hour range, and the GPU tasks dropped into the 8-15 minute range.

But I was ambitious. So I tried my first hardware experiment (next message).


I too have been experimenting with a Dell system for crunching. I have an Inspiron 3650 with i5-6400. System was given to me. I installed a GTX750ti, and converted it to Linux (that was interesting with Dell's version of secure boot). I run the GPU on one core and use only two other cores for CPU crunching due to staying with the 235W power supply. Keeping one core idle helped with CPU temps and power consumption. I'm running Anonymous with AVX for CPU and CUDA80 for GPU. CPU runs about 38W and the GPU about 32W. I am not using the iGPU.

So far, it has performed well. RAC is just under 20k. It will be interesting to see if the power supply can hold up with this constant load on it. If it dies, I am out nothing anyway.
ID: 1879847
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879893 - Posted: 23 Jul 2017, 11:12:16 UTC - in response to Message 1879847.  

So far, it has performed well. RAC is just under 20k. It will be interesting to see if the power supply can hold up with this constant load on it. If it dies, I am out nothing anyway.


I don't expect your PSU to die, but I have priced replacement PSUs for my Dell at around $35 on eBay. It is possible yours has similar prices.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879893
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879900 - Posted: 23 Jul 2017, 12:37:55 UTC - in response to Message 1879825.  

It is also what would have happened with Seti if you hadn't reserved a CPU core for GPU processing.

BOINC is doing exactly what you have told it to do. For Seti it reserves a CPU core for GPU work, for the other projects it doesn't. It's working exactly as you've told it to.


So it sounds like I need to go ahead and put an app_config.xml in each project telling it to use one CPU for one GPU. Otherwise I won't get the full "you really need 1 core for each SOG GPU task" benefit when I am not running Seti on the CPUs?

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879900
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1879902 - Posted: 23 Jul 2017, 12:58:03 UTC - in response to Message 1879900.  

So it sounds like I need to go ahead and put an app_config.xml in each project telling it to use one CPU for one GPU. Otherwise I won't get the full "you really need 1 core for each SOG GPU task" benefit when I am not running Seti on the CPUs?


I just tried that. While I can constrain the total number of CPU tasks running on a project, apparently I can't require each project to dedicate a core to my Seti GPU task while a non-Seti CPU task is running?

<app_config>
 <app>
  <name>minirosetta</name>
  <gpu_versions>
   <gpu_usage>1.0</gpu_usage>
   <cpu_usage>1.0</cpu_usage>
  </gpu_versions>
 </app>
</app_config>


What am I missing?

Tom
A proud member of the OFA (Old Farts Association).
ID: 1879902
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1879924 - Posted: 23 Jul 2017, 16:12:54 UTC - in response to Message 1879902.  

So it sounds like I need to go ahead and put an app_config.xml in each project telling it to use one CPU for one GPU. Otherwise I won't get the full "you really need 1 core for each SOG GPU task" benefit when I am not running Seti on the CPUs?


I just tried that. While I can constrain the total number of CPU tasks running on a project, apparently I can't require each project to dedicate a core to my Seti GPU task while a non-Seti CPU task is running?

<app_config>
 <app>
  <name>minirosetta</name>
  <gpu_versions>
   <gpu_usage>1.0</gpu_usage>
   <cpu_usage>1.0</cpu_usage>
  </gpu_versions>
 </app>
</app_config>


What am I missing?

Tom


Since Rosetta doesn't have a GPU application this app_config will do nothing.
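A hedged aside: for a CPU-only project like Rosetta, the app_config.xml element that does have an effect is max_concurrent, which caps how many of that app's tasks run at once and so can indirectly keep a core free. A sketch, assuming an 8-thread machine where you want to leave one thread for the GPU task:

```xml
<app_config>
 <app>
  <name>minirosetta</name>
  <!-- run at most 7 minirosetta tasks at once, leaving one thread free -->
  <max_concurrent>7</max_concurrent>
 </app>
</app_config>
```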

In BOINC Manager does it say Running (1 CPUs + 1 Nvidia GPUs) for the GPU task?
Are you certain you actually have 8 CPU tasks running while the GPU task is active, and that none of them say waiting to run?
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1879924
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1880032 - Posted: 24 Jul 2017, 6:52:59 UTC - in response to Message 1879924.  
Last modified: 24 Jul 2017, 6:53:43 UTC

In BOINC Manager does it say Running (1 CPUs + 1 Nvidia GPUs) for the GPU task?


Yes

Are you certain you actually have 8 CPU tasks running while the GPU task is active, and that none of them say waiting to run?


That is why it felt so unusual. I could count 9 "running" on an 8-core machine where the GPU task said "(1 CPUs + 1 Nvidia GPUs)".
As far as I can tell, there wasn't any decrease in processing speed such as a (0.33 CPUs + 1 Nvidia GPUs) might have implied. Certainly I have experienced 8 CPU cores plus a GPU task being run within the 8 cores, giving 9 tasks, but previously it always said some CPU number smaller than one to get that way.

I think I have disabled the Intel GPU (at least no driver for it was loading) and the situation didn't change. Just now, with a mix of CPU tasks including 4 Seti@home tasks, the listing went back to a total of 8 "running", including the GPU task.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1880032
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1880034 - Posted: 24 Jul 2017, 7:30:31 UTC

When a GPU task reports as running "1 x GPU + 1 x CPU", these are target maxima, particularly the CPU usage, not requirements. Thus, if the GPU process only needs a small fraction of the 1 CPU, it will only take what it needs, leaving the rest of that CPU (core) available for other use. And hence you are getting 8 CPU tasks running, plus 1 GPU task.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1880034
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1880066 - Posted: 24 Jul 2017, 15:05:21 UTC

Tom,

I think I've asked you this before but how are you monitoring CPU usage of each work unit?
ID: 1880066
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1880213 - Posted: 25 Jul 2017, 2:10:34 UTC - in response to Message 1880066.  

Tom,

I think I've asked you this before but how are you monitoring CPU usage of each work unit?


I think you pointed me at SIV? I think I got it downloaded and then got distracted.

Since one of my primary interests is safely getting a GTX 1060 3GB to live in my Dell 7010, I was focusing on the power draw of each of the available GPUs I have.

I will get SIV installed and start trying to make heads or tails of things. I'm a little distracted today by the 16c/32t monster that I ordered showing up. It's been a learning experience.

:)

Tom
A proud member of the OFA (Old Farts Association).
ID: 1880213
Profile Tom M
Volunteer tester
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1880255 - Posted: 25 Jul 2017, 9:57:30 UTC - in response to Message 1880066.  

Tom,

I think I've asked you this before but how are you monitoring CPU usage of each work unit?


Could you explain exactly what you mean by "monitoring CPU usage of each work unit"?

I installed SIV but am at a loss trying to understand how what I think I can see it monitoring applies to "...."?

Thanks,
Tom
A proud member of the OFA (Old Farts Association).
ID: 1880255
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1880275 - Posted: 25 Jul 2017, 22:47:21 UTC - in response to Message 1880255.  
Last modified: 25 Jul 2017, 23:03:13 UTC

I also use BoincTasks, which I don't know if I ever mentioned before or not. I tend to use them both to keep an eye on what my system is doing. While I have BOINC Manager, I don't really use it much other than just when I first boot up.

When you are looking at SIV, it should give you a readout of all your CPUs and what percentage of each core is being used.

[screenshot: SIV per-core CPU usage display]

As you can see from this image, if I move the cursor over a core, I can see what the current usage is and how much of the core is in use. I can also see the max and min when it's being used.

I find that unless I use a -cpu_lock in a command line somewhere, CPU cores will be shared across different apps and possibly different projects. As you can see, even though I have 12 work units running, almost all the cores have some usage.

Now, if I were to use -cpu_lock, then that core would be used solely by 1 work unit and none of the others.

So you can see from my example that all the cores are splitting the work between them (i.e. 12 work units spread across 20 cores), yet no core is at 100%. The total CPU usage of the 20 cores is 62%.

I find that BOINC doesn't give me a good enough picture of what is really going on.
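On the -cpu_lock flag mentioned above: for the Lunatics SoG builds it goes in the app's command-line text file in the Seti project directory, not in app_config.xml. The filename below matches a typical NV OpenCL install but is an assumption; check the actual file names in your own install:

```text
mb_cmdline_win_x86_SSE3_OpenCL_NV.txt contents (one flag per line):
-cpu_lock
```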

Edit:

Sorry, so what I'm getting at is this: you say 8 apps are using 8 of your cores. Well, if you only ran the CPU units and not the GPU, you could see whether 100% of all 8 cores is actually in use, or whether, say, only 91% of all 8 is being used. Thus 9% of 8 cores might be unused. If that were the case, when you start up the GPU work unit, it sees those unused CPU cycles and attaches itself to them. Thereby you have 8 CPU work units and 1 GPU work unit running.
ID: 1880275


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.