Reports on experiments with a Dell Optiplex 7010 Microtower

Message boards : Number crunching : Reports on experiments with a Dell Optiplex 7010 Microtower
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1880282 - Posted: 25 Jul 2017, 23:48:40 UTC

I echo almost exactly Zalster's use of SIV for system, GPU and CPU monitoring and BoincTasks for task monitoring. Between them both I know what is going on on each system at any one moment. The two apps are indispensable in my opinion for serious crunchers working under Windows. I miss them both on my new Linux cruncher.
Seti@Home classic workunits: 20,676, CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1880282 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1880977 - Posted: 29 Jul 2017, 16:40:01 UTC
Last modified: 29 Jul 2017, 16:41:59 UTC

I am just waiting to see if I can shoehorn the Gtx 1060 3GB's power draw into my Dell Optiplex 7010. It turns out that I have a "stock" PS of 275 watts, so "in theory" a Gtx 1060 3GB should run. Plus I am "watching" a 300 watt PS native to the 7010 on eBay, trying to decide if I should pick it up.

All of which may make a shambles of my theory that I overloaded and burned out my "original" 7010 motherboard. Instead maybe it was just "feeling old"? After all, all of this is "used" PC hardware I am playing with, except (usually) the higher end video cards. Since I also had a 500 watt aftermarket PS in there, who knows what the "real" culprit was?

Now that I have found my "Kill A Watt" meter, I hope to get an early "is this a good idea or bad idea" reading. If it is a bad idea, I have another machine I can migrate the new Gtx 1060 to.

I really want to "prove" that you can safely run a Gtx 1060 3GB video card in a "stock" Dell Optiplex 7010, because that would provide you with a pretty fast AVX-capable CPU (Lunatics) as well as a fast GPU processor for Seti, all on a turnkey box for $250 plus the cost (which is still bouncing around) of a Gtx 1060 3GB video card. (Maybe a ballpark of $500 plus an OS.)

Don't get me wrong. My Gtx 750 Ti under the Lunatics Beta6 distro is pumping out a standard SoG-type GPU task in 18 minutes or less. But my Gtx 1060 3GB in another box routinely starts at 12 minutes and often does it in 8 or less. Not to mention the odd 4 minute task. And that is under stock Seti. So I would like to make my 7010 more competitive with my bigger box :)

Tom
A proud member of the OFA (Old Farts Association).
ID: 1880977 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1881584 - Posted: 2 Aug 2017, 3:01:51 UTC - in response to Message 1880977.  

Now that I have found my "Kill A Watt" meter, I hope to get an early "is this a good idea or bad idea" reading.


The Gtx 1060 3GB "mini" (short card, 1 fan) came in today. Installed it about 2pm local time.

Had to remember to open up the -sbs setting to 1024 from 512. Tried it with -period_iterations_num 1 and 4. It's still a little "laggy" so I just bumped it to 10.

The "Kill a Watt" meter says that with non-Seti tasks running on the cpu plus Gtx 1060 gpu it is drawing under 200 watts. With an "all Seti" bunch of tasks it goes as high as 221 watts. This is on a 275 watt PS.

And when I try to use 2 monitors, after a while, they both "go away". So I have to reboot the system. I am still not sure if I saw a momentary spike of 700 watts or not.

On the strength of the current results I have gone ahead and ordered the 300 watt PS since it is both cheap and would (in theory at least) put me in compliance with the "recommended" 300 watt PS.
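
As a rough cross-check, here is a sketch of the headroom arithmetic (the efficiency figure is an assumption, not a measurement; the Kill A Watt reads AC power at the wall while the 275 watt rating is DC output):

# Rough headroom estimate - the efficiency value is an assumption, not a measurement.
wall_watts = 221.0      # highest Kill A Watt reading under an all-Seti load
efficiency = 0.87       # assumed PSU efficiency at this load (varies by unit)
psu_rating = 275.0      # Dell's rated DC output for the stock supply

dc_load = wall_watts * efficiency
print("Estimated DC load: %.0f W of %.0f W rated (%.0f%%)"
      % (dc_load, psu_rating, 100.0 * dc_load / psu_rating))
# Roughly 192 W of 275 W - but note the rail discussion a few posts down:
# the limit that usually matters is the 12 V rail, not the combined rating.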

I will try out the 2 monitor experiment some more tomorrow but it may require that I wait for the PS upgrade before I will be able to count on having 2 monitors.

So far, pretty good.

The other experiment I have started is to see where the RAC settles for GPU-only Seti. It will take a while; I have to let the CPU tasks drain before the changeover will make a difference. This will allow me to compare the RAC of a stock system on another box with the Lunatics version on this box.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1881584 · Report as offensive
Ianab
Volunteer tester

Send message
Joined: 11 Jun 08
Posts: 732
Credit: 20,635,586
RAC: 5
New Zealand
Message 1882001 - Posted: 4 Aug 2017, 7:35:05 UTC - in response to Message 1881584.  

I would guess you are overloading the PSU as well.

The reason is that the 275w rating is for the combined power from the 12v, 5v and 3.3v rails. But in a modern machine most of the power is sucked from the 12v rail only. That powers the CPU and GPU via the onboard regulators. It doesn't matter if the 5 and 3.3 are only partly used by the RAM and motherboard accessories etc; if the 12v rail is buckling under the load, you crash, even if you have spare amps on the other rails.

If the system is built right it "should" power a PCIe card that's powered from the bus only, but I think that drops you back to a 1050 style GPU. The PCIe standard says it should be able to provide "x" amps via the expansion slot. So the machine "should" be able to supply that. But if you add a card that needs an aux power connector, that's extra power that needs to come from some place.
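
A minimal sketch of that rail arithmetic (the amperage and wattage figures below are illustrative assumptions, not Dell's published specs for this unit):

# Illustrative 12 V rail budget - these figures are assumptions, not Dell's specs.
rail_12v_amps = 17.0
rail_12v_watts = 12.0 * rail_12v_amps      # ~204 W available on the 12 V rail

cpu_watts = 77.0        # typical desktop i5/i7 package power under load
gpu_watts = 120.0       # GTX 1060 board power, drawn almost entirely from 12 V
fans_drives = 15.0      # rough 12 V share for fans and drive motors

load_12v = cpu_watts + gpu_watts + fans_drives
print("12 V load: %.0f W of %.0f W available on the rail"
      % (load_12v, rail_12v_watts))
# 212 W is comfortably under the 275 W combined rating, yet it already exceeds
# what this hypothetical 12 V rail can deliver on its own.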
ID: 1882001 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1882059 - Posted: 4 Aug 2017, 14:54:27 UTC - in response to Message 1882001.  

If the system is built right it "should" power a PCIe card that's powered from the bus only, but I think that drops you back to a 1050 style GPU. The PCIe standard says it should be able to provide "x" amps via the expansion slot. So the machine "should" be able to supply that. But if you add a card that needs an aux power connector, that's extra power that needs to come from some place.


I do have the power connector being supplied by the two unused SATA power leads through some conversion wiring. And from previous experience with another system, when I forgot and didn't quite get it plugged in right, the BIOS stopped the boot and said, please plug in the video card's external power :)

So, on or about August 11th when the 300 watt PS arrives I will perform the PS upgrade and try the 2 monitors again.

My temporary fallback is a known working Gtx 750 Ti two-monitor solution. If I can't get 2 monitors on the Gtx 1060 to work reliably, I will start mulling over whether I want to go the 1050 Ti route.

Thank you for your discussion. I need all the information I can gather.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1882059 · Report as offensive
Profile BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1882079 - Posted: 4 Aug 2017, 16:36:31 UTC - in response to Message 1879822.  


<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
  <gpu_usage>1.00</gpu_usage>
  <cpu_usage>1.00</cpu_usage>
  </gpu_versions>
 </app>
</app_config>

If those are your settings, it's doing exactly as you've told it to do.
Save 1 CPU core for each GPU WU that is running.

But I have 7 cpu tasks running and a gpu task running using 1 core. 7+1=8 (normal)
Under CSG/Rosetta I have 8 cpu tasks running and a gpu task running using 1 core. 8+1=9 tasks.

Is this "gpu task running" always a setiathome_v8 GPU task?

If so there may be some rounding bug in BOINC
(or (unlikely) Rosetta@home server sets that each CPU task will need 0.85 CPUs)

Try this:
<app_config>
 <app>
  <name>setiathome_v8</name>
  <gpu_versions>
  <gpu_usage>1.00</gpu_usage>
  <cpu_usage>1.10</cpu_usage>
  </gpu_versions>
 </app>
</app_config>

If this works - try for test <cpu_usage>1.01</cpu_usage> or even <cpu_usage>1.000001</cpu_usage>
If it doesn't work - try for test <cpu_usage>1.99</cpu_usage>
(After every change restart the GPU task, e.g. by Snooze GPU)


I did look at the task manager and it still says 8 cpus.

How many CPU processes started by BOINC do you see? (for this Process Explorer is better - shows them as child processes of boinc.exe)
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1882079 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882093 - Posted: 4 Aug 2017, 17:34:04 UTC

When running SETI there is no point in setting GPU usage above 1.0 - the current batch of SETI applications do not have the ability to use more than 1 CPU core/thread.

<cpu_usage>1.00</cpu_usage> is best interpreted as "Try to use 1.0 cpu cores/thread", it does NOT mean "Use exactly 1.00 cpu core/thread".
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882093 · Report as offensive
Profile BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1882111 - Posted: 4 Aug 2017, 20:00:50 UTC - in response to Message 1882093.  

When running SETI there is no point in setting GPU usage above 1.0

Where did I say "setting GPU usage above 1.0"?


- the current batch of SETI applications do not have the ability to use more than 1 cpu core/thread.

So? Why should this matter?


<cpu_usage>1.00</cpu_usage> is best interpreted as "Try to use 1.0 cpu cores/thread", it does NOT mean "Use exactly 1.00 cpu core/thread".

This is not telling the app anything
(the app doesn't know of the existence of the <cpu_usage>1.00</cpu_usage> tag,
the app doesn't read app_config.xml, nor is BOINC limiting or forcing the app to use that amount.
So nothing is telling the app "Try to use 1.0 cpu cores/thread" nor "Use exactly 1.00 cpu core/thread"
)

This has nothing to do with "the ability to use more than 1 cpu core/thread"

app_config.xml is only interpreted by BOINC
<cpu_usage>x.xx</cpu_usage> is telling BOINC (and not the app) to remove that # of CPUs from the pool available for CPU tasks (and nothing more)


(Without reading the BOINC source) this will be something like:
CPUpool = RoundUp ( CPUpool - cpu_usage * RunningGPUtasksNum )

If the behaviour described by Tom Miller is true, I suspect there may be some (rounding?) bug in BOINC, so I suggest testing it.
You can set <cpu_usage>3.00</cpu_usage> and BOINC should reduce the # of running CPU tasks by 3
This does not mean that the app or anything will use/load the "free cores"

You know that with <cpu_usage>0.99</cpu_usage> BOINC will not "reserve a core"
So I suspect that converting from decimal to binary may lead to a variable being set to e.g. <cpu_usage>0.999999999999999</cpu_usage> instead of 1.00000000000000000
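
A small sketch of that bookkeeping (my paraphrase of the formula above, not BOINC source code; whether the real code floors or ceils is exactly where such a bug would show up):

import math

def cpu_tasks_allowed(total_cpus, cpu_usage, running_gpu_tasks):
    # "RoundUp" as written above
    return math.ceil(total_cpus - cpu_usage * running_gpu_tasks)

# Tom's case: 8 threads, one GPU task, various cpu_usage values
for usage in (0.99, 1.00, 1.01, 1.10):
    print(usage, "->", cpu_tasks_allowed(8, usage, 1), "CPU tasks allowed")
# 0.99 -> 8, 1.00 -> 7, 1.01 -> 7, 1.10 -> 7

# The decimal-to-binary concern: 1.10 has no exact double representation,
# although 1.00 does.
print("%.20f" % 1.10)   # 1.10000000000000008882...
print("%.20f" % 1.00)   # 1.00000000000000000000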
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1882111 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882115 - Posted: 4 Aug 2017, 20:22:39 UTC - in response to Message 1882111.  

When running SETI there is no point in setting GPU usage above 1.0

Where did I say "setting GPU usage above 1.0"?


In your previous post!!!
If so there may be some rounding bug in BOINC
(or (unlikely) Rosetta@home server sets that each CPU task will need 0.85 CPUs)

Try this:

<app_config>
<app>
<name>setiathome_v8</name>
<gpu_versions>
<gpu_usage>1.00</gpu_usage>
<cpu_usage>1.10</cpu_usage>
</gpu_versions>
</app>
</app_config>


If this works - try for test <cpu_usage>1.01</cpu_usage> or even <cpu_usage>1.000001</cpu_usage>
If it doesn't work - try for test <cpu_usage>1.99</cpu_usage>
(After every change restart the GPU task, e.g. by Snooze GPU)

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882115 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882121 - Posted: 4 Aug 2017, 20:47:49 UTC

I see what you are trying to say, but it took a lot of untangling.
The "cpu_usage" tag is not the best place to do the setting aside of CPU cores/threads for the application. It does NOT reserve a CPU thread/core, although it may appear to do so under some circumstances. As I explained earlier, it is not treated as a "reserve (fraction of) a core for this activity", but "use this number as a target", one which may be exceeded quite readily by the application. For example - on my Windows computer I have four SoG tasks running, with cpu_usage=0.1; according to your theory, that would only set 0.4 core/thread aside (probably rounded up to 1) for the four tasks, however each task is actually using approximately one complete core, and four cores/threads are used.
The best place to "reserve" CPU cores/threads is in the BOINC manager (advanced view / Options / Computing preferences, "Computing" tab): set "use at most xx% of the CPUs" to give you the number of cores/threads you want to reserve for GPU tasks (and other non-BOINC operations). This works every time, provided you get your sums right and take note of the rounding errors.
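
A quick arithmetic sketch of that percentage setting (illustrative values; exactly how BOINC rounds the resulting core count is worth verifying on your own machine):

# Convert "threads to keep free" into the "use at most xx% of the CPUs" value.
def cpu_percent_setting(total_threads, threads_to_reserve):
    percent = 100.0 * (total_threads - threads_to_reserve) / total_threads
    effective = int(total_threads * percent / 100.0)   # threads BOINC will use
    return percent, effective

print(cpu_percent_setting(8, 1))   # (87.5, 7) -> one thread kept free
print(cpu_percent_setting(8, 2))   # (75.0, 6) -> two threads kept free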

Tom is trying to convince BOINC to run a fixed number of SETI & Rosetta GPU tasks at the same time, and has found that due to the way BOINC interprets and controls the "resource share" it is all but impossible to do in the long term - one can get it working for a time, but if anything disturbs a server then all bets are off.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882121 · Report as offensive
Profile BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1882161 - Posted: 4 Aug 2017, 23:52:04 UTC - in response to Message 1882121.  
Last modified: 5 Aug 2017, 0:27:03 UTC

Where did I say "setting GPU usage above 1.0"?

In your previous post!!!
<cpu_usage>1.10</cpu_usage>

So you bolded <cpu_usage> to show me that I say "setting GPU usage above 1.0"? ;)



For example - on my Windows computer I have four SoG tasks running, with cpu_usage=0.1; according to your theory, that would only set 0.4 core/thread aside (probably rounded up to 1) for the four tasks, however each task is actually using approximately one complete core, and four cores/threads are used.

"According to my theory" cpu_usage=0.1 will Not "set 0.4 core/thread aside" (nor 1)
BOINC will not change the number of started (running) CPU apps/tasks (which is all the "cpu_usage" tag is used for)

CPUpool = RoundUp ( CPUpool - cpu_usage * RunningGPUtasksNum )
CPUpool = RoundUp ( 8 - 0.1 * 4 )
CPUpool = RoundUp ( 7.6 )
CPUpool = 8
= no change


... and four cores/threads are used

BOINC does not monitor or control CPU load in any way.
BOINC does not know what the real CPU load will be (or in fact is).
When setting something in app_config.xml it is up to the user whether near-to-reality values are typed or whether the user wants to lie to BOINC (which BOINC will happily obey)

For the test of the possible bug I suggested purposefully lying to BOINC to see if this will work around the bug.

On stock apps the same info is supplied by the server.


The best place to "reserve" CPU cores/threads is in the BOINC manager ... "use at most xx% of the CPUs"

"cpu_usage" tag in app_config.xml have absolutely the same purpose.
Think of it as [automatically change "use at most xx% of the CPUs"] when GPU tasks are running.
 
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1882161 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882206 - Posted: 5 Aug 2017, 6:04:23 UTC

So you bolded <cpu_usage> to show me that I say "setting GPU usage above 1.0"? ;)

Sorry, I was playing with a speech to type thing - and it got my accent confused. Back to manual typing.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882206 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882208 - Posted: 5 Aug 2017, 6:19:43 UTC

The truth is that the cpu_usage tag is all but ignored by most, if not all, versions of BOINC. I say this because I've NEVER seen the value set being "obeyed". If one sets cpu_usage to say 0.1, and then runs 4 GPU tasks, one would reasonably expect to see one CPU thread/core used by all four tasks, but what one actually sees is four CPU threads/cores used, a tiny bit by each (but SoG demands the best part of a core, so it is a very poor example to try this sort of thing with).
The observation does not support your theory.

I suspect that we are actually seeing a number of bugs around the use of CPU cores/threads when the main processing is on a GPU. Rounding is a side show in what is actually a poorly written section of code within BOINC.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882208 · Report as offensive
Profile BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1882213 - Posted: 5 Aug 2017, 7:10:41 UTC - in response to Message 1882208.  

I've NEVER seen the value set being "obeyed".

The value is obeyed by BOINC, to run fewer CPU tasks and nothing more.
Else how does everybody see that it works to "free a core" (as they say)?

Why don't you just try <cpu_usage>1.5</cpu_usage>, and you should see that the number of running CPU tasks is reduced by:
4*1.5 = 6 (i.e. if you currently run 8 CPU tasks, only 2 will run)

- just try not to exceed the number of CPUs available to BOINC, in which case some of the GPU tasks will be stopped by BOINC,
e.g. 4*3.0 = 12, so on 8 "cores" BOINC will probably start only 2 GPU tasks (for each of which BOINC will think 3 cores are needed)

Other calculations:
4*1.49 = 5.96 = 5 less
4*0.99 = 3.96 = 3 less
4*0.75 = 3 less
4*0.50 = 2 less
4*0.24 = 0.96 = 0 less (i.e. the # of running CPU tasks will not be reduced)


If one sets cpu_usage to say 0.1, and then runs 4 GPU tasks, one would reasonably expect to see one CPU thread/core used by all four tasks, but what one actually sees is four CPU threads/cores used

You may "expect to see one CPU thread/core used" but the apps don't react on this value, only BOINC does.
Rename <cpu_usage> to <if-this-gpu-app-is-running-reduce-the-number-of-running-cpu-apps-by> to understand what it tells to BOINC (and do not tell anything to apps)


The observation does not support your theory.

Ask someone that knows BOINC well and whom you trust to check and comment on my posts here.
 
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1882213 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22160
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1882219 - Posted: 5 Aug 2017, 8:48:25 UTC

So what's the point of setting the cpu_usage value then - that surely is the bug, not the rounding?

The user setting that value would expect a GPU task to use only the entered fraction of a CPU. However, having spent the last few weeks crawling over BOINC source code, I can't see the point of entering it, as it is only used within BOINC and NEVER passed to the client app; thus the client GPU app assumes it can use whatever amount of CPU it needs to.
I did try your suggestion some time ago, and it is unreliable in its outcome: sometimes CPU tasks are reduced, other times GPU tasks are reduced (which according to your theory is the "correct" outcome), and other times a sort of competition will break out and not a lot of work gets done as both CPU and GPU tasks attempt to share cores.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1882219 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1884136 - Posted: 16 Aug 2017, 6:28:28 UTC

I finally got my 300 watt PS for the Dell and upgraded it. The Kill A Watt meter now registers 230 watts more frequently.

The GPU is still not willing/able to drive 2 monitors. As soon as I unplugged the "2nd monitor" (while the system was running) the first monitor lit up. I suspect this could still be a PS problem, because previous GPUs that didn't need extra PS power could drive 2 monitors.

I am currently driving the GPU (Gtx 1060 3GB Mini) off two different SATA leads that also run the DVD drive and the HDD. So the jury is still out. The GPU itself is crunching Seti WUs/tasks steadily at a robust 15,000+ RAC level.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1884136 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1884432 - Posted: 17 Aug 2017, 11:46:06 UTC

The RAC just went "flat" (instead of upwards) at 16,976.85 (the graph basically says about 17,000).

If it stays that way for a couple of days (or wobbles around that result) I am going to claim that for the Lunatics SOG app (whichever one I happen to be running off the beta6 installer) a Gtx 1060 3GB video card is "worth" about 17,000 credits for the RAC.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1884432 · Report as offensive
Profile Wiggo
Avatar

Send message
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1884552 - Posted: 17 Aug 2017, 20:12:05 UTC

The RAC just went "flat" (instead of upwards) at 16,976.85 (the graph basically says about 17,000).

If it stays that way for a couple of days (or wobbles around that result) I am going to claim that for the Lunatics SOG app (whichever one I happen to be running off the beta6 installer) a Gtx 1060 3GB video card is "worth" about 17,000 credits for the RAC.

Tom

You have a bit to go with that rig yet Tom, as my 3GB 1060s are good for 21K each, and for that rig of yours I'd expect a RAC around 23K using all its resources.

Cheers.
ID: 1884552 · Report as offensive
Profile Tom M
Volunteer tester

Send message
Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1885531 - Posted: 23 Aug 2017, 6:54:02 UTC - in response to Message 1884552.  

You have a bit to go with that rig yet Tom, as my 3GB 1060s are good for 21K each, and for that rig of yours I'd expect a RAC around 23K using all its resources.

Cheers.


It looks like it peaked someplace near 18,200 RAC before the Tuesday blackout window hit. I am slowly, as they run out, removing the "other" projects. Once I get down to just WCG and throttle it to 2-3 cores, I will be turning on the CPU tasks for Seti to see how much further up I can get it. I want to keep a "small" thread of WCG tasks running, but it doesn't do "small" very well, so I am just going to have to let them start running out of time until the scheduler figures out not to download so many.

Tom
A proud member of the OFA (Old Farts Association).
ID: 1885531 · Report as offensive