Will this slow Boinc down?

Message boards : Number crunching : Will this slow Boinc down?

Author | Message
Profile MattDavis
Volunteer tester
Avatar

Send message
Joined: 11 Nov 99
Posts: 919
Credit: 934,161
RAC: 0
United States
Message 83418 - Posted: 3 Mar 2005, 23:21:48 UTC

I have LHC running on an ooooold computer. I know leaving the graphics on slows down crunching time. Will leaving Boinc open on the work tab slow crunching down too?
-----
ID: 83418 · Report as offensive
Profile Thierry Van Driessche
Volunteer tester
Avatar

Send message
Joined: 20 Aug 02
Posts: 3083
Credit: 150,096
RAC: 0
Belgium
Message 83419 - Posted: 3 Mar 2005, 23:30:11 UTC
Last modified: 3 Mar 2005, 23:31:38 UTC

Some testing of this was done during the Beta test phase.

The smallest CPU usage is achieved when leaving the GUI with the "Projects" tab open; all other tabs use a little more CPU, especially the "Disk" tab.

Read the News at the homepage
Have a look at the Technical News
Check whether your question is already answered in the Boinc Wiki

Best greetings, Thierry
ID: 83419 · Report as offensive
Profile Prognatus

Send message
Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 83426 - Posted: 4 Mar 2005, 0:00:20 UTC
Last modified: 4 Mar 2005, 0:02:26 UTC

> The smallest amount of used CPU is achieved when leaving the GUI with the tab "Projects" open

I'm a little confused... Does this apply when the BoincGui runs in the background too (closed, and not visible)? In other words, one should select the Projects tab before _closing_ the GUI app?

ID: 83426 · Report as offensive
Profile Bruno G. Olsen & ESEA @ greenholt
Volunteer tester
Avatar

Send message
Joined: 15 May 99
Posts: 875
Credit: 4,386,984
RAC: 0
Denmark
Message 83434 - Posted: 4 Mar 2005, 0:21:59 UTC - in response to Message 83426.  

> > The smallest amount of used CPU is achieved when leaving the GUI with the
> tab "Projects" open
>
> I'm a little confused... Does this apply when the BoincGui runs in the
> background too (closed, and not visible)? In other words, one should select
> the Project tab before _closing_ the GUI app?

In earlier versions it did apply - I don't know about 4.19+ - so even when my BoincGui is in the system tray, I always make sure it is on any tab other than the Work tab - mostly the Messages tab. That's because the BOINC GUI still keeps updating (or did) the graphics even when not needed.


ID: 83434 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 83483 - Posted: 4 Mar 2005, 19:23:57 UTC - in response to Message 83434.  

> > > The smallest amount of used CPU is achieved when leaving the GUI
> with the
> > tab "Projects" open
> >
> > I'm a little confused... Does this apply when the BoincGui runs in the
> > background too (closed, and not visible)? In other words, one should
> select
> > the Project tab before _closing_ the GUI app?
>
> In earlier versions it did apply - don't know about 4.19+ - so even when my
> boincgui is in the system tray, I always make sure that it is on any other tab
> than the work tab - mostly the messages tab. That's because the boinc gui
> still keeps updating (or did) the graphics even when not needed.

4.19 still does. In theory, Projects would be the best tab to use, as it is static, but even if you select that, it will be back on Messages before you know it ...
ID: 83483 · Report as offensive
Profile Prognatus

Send message
Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 83738 - Posted: 8 Mar 2005, 2:12:09 UTC

> Projects would be the best tab to use as it is static but even if you select that, it will be back on messages before you know it ...

Yes, and if that tab slows down crunching, perhaps there should be a user-selectable option in BoincGui to stay on the Projects tab - or better: not to update messages and other data in BoincGui until the user restores its view.

Besides, I've always thought there should be a user option to adjust the CPU priority of the command-line client that does the actual crunching. When I let it run during the night, I set the priority to "Above normal" or "High", but it only works for the task that's currently running. The next WU gets another task, and then the priority is reset to the default (which we cannot change...).
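(For reference, on a Unix-like system the same effect can be scripted outside BOINC. A minimal sketch using Python's standard library - the delta value is an arbitrary example, and note this is the POSIX "nice" model rather than the Windows priority classes mentioned above:)

```python
import os

def lower_priority(delta=10):
    """Raise this process's niceness by `delta` (higher niceness means
    lower CPU priority). Unprivileged users can only raise niceness,
    never lower it, so this can de-prioritize a cruncher but not boost
    it above normal."""
    return os.nice(delta)  # returns the new niceness value
```

A wrapper script could call this at startup and then exec the crunching client, so every new work unit inherits the adjusted priority.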

ID: 83738 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 84095 - Posted: 8 Mar 2005, 21:24:58 UTC - in response to Message 83738.  

> Besides, I've always thought there should be a user option to adjust the CPU
> priority of the command-line client that does the actual crunching. When I let
> it run during the night, I set priority to "Above normal" or "High", but it
> only works for the task that's currently running. The next WU gets another
> task, and then the priority is reset to the default (which we cannot change...)

The whole point of demand-based priorities for computer process scheduling has been with us since almost the first true operating systems. What messes up the nice, neat way it is supposed to work is the silly users who change it to "make it better".

Fundamentally, over the long run, letting BOINC run at "idle" priority will complete just as much work as it will if you inflate the priority.

Honestly, other than making you feel good, it will have little effect on the total processing time. The truth is, if you want faster, you need to buy a faster processor ... :)
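(The long-run claim above can be sketched with a toy scheduler: an idle-priority job simply soaks up whatever CPU the foreground leaves unused in each time slice, so its total work is fixed by the slack, not by its own priority. Purely illustrative, not how any real scheduler is implemented:)

```python
def idle_job_work(foreground_load):
    """Total CPU an idle-priority job receives, given the per-tick
    foreground load (each entry is the fraction of one CPU in use,
    0.0 .. 1.0). The idle job gets exactly the slack, no more."""
    return sum(max(0.0, 1.0 - load) for load in foreground_load)

# A mostly-idle desktop (10% foreground load) leaves the cruncher
# about 90 of every 100 ticks; raising its priority could only take
# those same cycles away from the foreground tasks sooner.
```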

Tell your spouse that *I* said so ... for my own, I have $700 in the kitty for the new G5 when it is released, and I will probably have to float a loan from the "Bank of Nancy Buck" to buy it when it does come out ... :)

ID: 84095 · Report as offensive
Profile Prognatus

Send message
Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 84122 - Posted: 8 Mar 2005, 22:00:12 UTC
Last modified: 8 Mar 2005, 22:02:49 UTC

> Honest, other than making you feel good, there will be little effect in the total processing time it takes.

Maybe you're right. But I see a big difference in the time slice the client gets when I increase the priority at night: from about 85-90% to 92-97%. I have about 30 other tasks in the System Tray at the same time, among them several CPU-hungry apps (like Ad-Aware's Ad-Watch, WinPatrol, NAV, etc.) which I'm not willing to close down while my Internet line is open.

However, I haven't bothered to start logging the difference - if any - in result gain. :) So it may well be that the end result isn't very different in the long run.

> The truth is, if you want faster, you need to buy a faster processor ... :)

Yeah, I want to... but have no money for that just now... :)
In the mean time, I'll add a 2nd machine shortly.

ID: 84122 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 84205 - Posted: 9 Mar 2005, 1:15:34 UTC - in response to Message 84122.  

> Yeah, I want to... but have no money for that just now... :)
> In the mean time, I'll add a 2nd machine shortly.

Well, I want to add one also ... but, the last update for the G5 was just about a year ago and I would hate to buy a 2.5 and see that they came out with a 6.0 two months later ...

So, I am saving my allowance and hoping by June they will have announced, if they are not already shipping the next G5 ...

But, if I am REAL careful I may actually be able to add a second machine by the end of the year or the first part of next year. I priced it out and I might be able to get a "bare-bones" dual-Xeon in the 3.0 to 3.2 GHz range ... so, effectively I may be able to add 6 new CPUs to my processing farm with only two new boxes.

I just put a new motherboard into one of my machines that was slower than a similar box (the biggest difference was the MB not being dual-channel, AND the cheaper memory having 3-3-3-8 timings versus 2.5-3-3-8) ...

So, if the processing speed does not pick up that much I may go and get some faster memory for that computer too ...
ID: 84205 · Report as offensive
Profile Borgholio
Avatar

Send message
Joined: 2 Aug 99
Posts: 654
Credit: 18,623,738
RAC: 45
United States
Message 84221 - Posted: 9 Mar 2005, 1:43:59 UTC - in response to Message 84205.  

> > Yeah, I want to... but have no money for that just now... :)
> > In the mean time, I'll add a 2nd machine shortly.
>
> Well, I want to add one also ... but, the last update for the G5 was just
> about a year ago and I would hate to buy a 2.5 and see that they came out with
> a 6.0 two months later ...
>
> So, I am saving my allowance and hoping by June they will have announced, if
> they are not already shipping the next G5 ...
>
> But, if I am REAL careful I may actually be able to add a second machine by
> the end of the year or first part of next year. I priced it out and I might
> be able to get a "bare-bones" dual-Xeon in thee 3.0 to 3.2 range ... so,
> effectively I may be able to add 6 new CPUs to my processing farm with only
> two new boxes.

Dual-Xeon? That's 4 virtual processors! I HATE you. :)
You will be assimilated...bunghole!

ID: 84221 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 84241 - Posted: 9 Mar 2005, 2:21:51 UTC - in response to Message 84221.  

> Dual-Xeon? That's 4 virtual processors! I HATE you. :)

Well, stand in line! Most people hate me, so why should you be different?

:)

My whole life I thought it was funny: the more people associated with me, the more they liked me, but I make a really bad first impression in person. Autism is saying what is on your mind ... :)

But, yes, that is what I meant, a new dual-G5 of greater than 2.5 GHz and a dual Xeon ... note that I will NOT be buying a tip-top Xeon, more of the mid to lower end, but, yes, I expect that I might up-engine it later when the next pin-out comes out and the current pin-out is phased out ... or maybe even every 3-4 years ...

Since it would just be a "compute box" for BOINC, the rest of it can be nothing special, and even if it is NOT the fastest, as you pointed out, 4 logical CPUs...

Which, with a dual G5, gives me 6 new CPUs. Though will you like me less if the G5 (or a rumored G6) has dual cores, making it a dual-CPU-plus-dual-core box, i.e. 4 physical CPUs in the box ...

And if the AltiVec-optimized clients make it out of the labs, now, won't that be fine also ...

ID: 84241 · Report as offensive
Profile Prognatus

Send message
Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 84327 - Posted: 9 Mar 2005, 9:55:43 UTC

> But, yes, that is what I meant, a new dual-G5 of greater than 2.5 GHz and a dual Xeon

WOW... I'm glad I'm not in your team! LOL ;)

The machine I'm adding is just a sissy compared to this. I'm setting up a media computer in my living room, built from spare parts lying around. So, it's just an AMD 2000+ CPU in it. :)

> Which with a Dual G5 give me 6 new CPUs, though will you like me less if the G5 (or a rumored G6) would have dual cores; making it a dual CPU plus dual core, making it 4 physical CPUs in the box ...

Intel Xeon is also taking that route, right? And AMD Opteron also, I think. So in a few years S@H will have a lot more crunching power around the world! :)

ID: 84327 · Report as offensive
Profile Bruno G. Olsen & ESEA @ greenholt
Volunteer tester
Avatar

Send message
Joined: 15 May 99
Posts: 875
Credit: 4,386,984
RAC: 0
Denmark
Message 84350 - Posted: 9 Mar 2005, 11:11:48 UTC - in response to Message 83483.  


> 4.19 still does, in theory, Projects would be the best tab to use as it is
> static but even if you select that, it will be back on messages before you
> know it ...

That's exactly why I chose the messages tab when boinc's hidden ;)

> > Dual-Xeon? That's 4 virtual processors! I HATE you. :)
>
>Well, stand in line! Most people hate me, so why should you be different?
>
>:)

I don't hate you - I'm just envious as h*ll :D (a couple of those would do wonders for my team *lol*)


ID: 84350 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 84439 - Posted: 9 Mar 2005, 18:41:09 UTC - in response to Message 84327.  

Bjorn, Bruno,


> WOW... I'm glad I'm not in your team! LOL ;)

Why? I take a shower at least once a year whether or not I need one ...

> The machine I'm adding is just a sissy compared to this. I'm setting up a
> media computer in my living room, built from spare parts lying around. So,
> it's just an AMD 2000+ CPU in it. :)

Well, everyone needs a hobby, and mine is computers. Oh, and being disliked ... I might as well go with my strengths ...

> > Which with a Dual G5 give me 6 new CPUs, though will you like me less if
> the G5 (or a rumored G6) would have dual cores; making it a dual CPU plus dual
> core, making it 4 physical CPUs in the box ...

Which I really would like, but don't expect ... knock on wood, don't throw me into that briar patch ...

> Intel Xeon is also taking that route, right? And AMD Opteron also, I think.
> So in a few years S@H will have a lot more crunching power around the world!
> :)

Every CPU maker is going that way. The speed of light is a constant (as far as we know, though there is a proposed particle that can't go slower than the speed of light, but it has of course never been observed ...) and this places a fundamental limit on the ability to gain speed. One of the by-products of the quest for speed is the odd effect that the type of circuits used in chips consumes the most power when changing state. So, the more changes per unit time, the more heat per unit time.

So, with the speed of light as an upper limit, increasing the clock speed brings the leading and falling edges of a clock signal "closer" together. This means that the distance across the surface of the chip introduces clock "skew", to the point that it does not work.
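(The numbers behind that argument are easy to check. A back-of-the-envelope sketch; the assumption that on-chip signals propagate at roughly half the speed of light is a common rule of thumb, not a measured figure for any particular chip:)

```python
C = 299_792_458  # speed of light in vacuum, m/s

def mm_per_cycle(clock_hz, signal_fraction=0.5):
    """Distance a signal can travel in one clock cycle, in millimetres.
    signal_fraction models propagation slower than c on real wires."""
    return (C * signal_fraction / clock_hz) * 1000.0

# At 3 GHz a signal covers only about 50 mm per cycle - a handful of
# die-widths at best - so clock edges arriving at opposite corners of
# a large chip really can skew relative to each other.
```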

SO

The only alternatives left to improving performance are:

a) increase parallelism by adding more processing units.

b) make the processing more efficient by using extra resources to "speculatively" execute both sides of a decision and only commit the chosen path when you get there.

c) increase the number of instructions "in flight" at any given moment.

d) execute instructions "out of order" to use available resources most efficiently and then commit them "in order" when they are done.

e) add more and more specialized processing units to improve efficiency (look at Apple/IBM's AltiVec unit, which can significantly improve vector performance).

f) improve the efficiency of the micro-engine running within the processor core (note: NO processor actually executes the x86 instruction set natively anymore - way too inefficient; they all convert the x86 instructions into an internal form and execute that ... way faster).

g) add optimizations to the external x86 instructions to remove context switches and the like, in effect "run-time" optimizing the code (look at the Transmeta processor and the HP "Dynamo" project, which got improvements of up to 20% even with the added overhead; an odd effect was that a processor running the Dynamo software - which, in effect, emulates the processor chip in software - actually ran programs faster than running them in native mode on the same machine!).
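(To make item b above concrete, here is a toy model of speculative execution. Illustrative only: real hardware does this with duplicated execution units running both paths concurrently, not with sequential function calls.)

```python
def speculative_branch(cond, then_path, else_path):
    """Evaluate both sides of a branch up front, then 'commit' only
    the result the condition actually selects; the other result is
    simply discarded, like a squashed speculative pipeline."""
    taken = then_path()        # speculative work on the taken path
    not_taken = else_path()    # ...and on the not-taken path
    return taken if cond() else not_taken
```

On real silicon the two paths cost no extra wall-clock time as long as spare execution units are available, which is exactly why "throwing hardware" at the problem helps.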

In a quaint expression, we are "throwing hardware" at our performance problems. One of the interesting marketing things going on is that both AMD and Intel are making serious efforts to disconnect people's minds from clock speeds.

Why? You might ask ...

Well, clock speed is not a true indicator of performance, never was, never will be ...

An AMD chip running at 1.4 GHz could equal the performance of a 1.8 GHz Intel processor. My 2.0 GHz G5 (actually an IBM PowerPC 970 chip, but renamed by Apple to make things easier to keep track of) outruns my 3.2 GHz Intels in many places, and not just because it is two physical processors ...

So, don't get all worked up if you did not follow all I wrote, there are probably things I missed anyway ... like the Itanium approach ... and others ...

And NO, making a CPU 64-bit does not imply faster performance. It enables better performance, which is not the same thing, and only for certain specific tasks ...
ID: 84439 · Report as offensive
Stanislav Sokolov
Volunteer tester
Avatar

Send message
Joined: 8 Apr 02
Posts: 26
Credit: 380,456
RAC: 0
Norway
Message 85038 - Posted: 11 Mar 2005, 11:49:21 UTC

This seemed like an appropriate thread to post this observation, especially since 4.25 is so big that it takes 10 minutes to load everything before the reply box shows up. [off-topic]To the board devs: please make the reply box always show on top, independent of the current sorting order.[/off-topic]

I have the Boinc CLI installed as a service, with no manager running. In this scenario the boinc process uses between 3% and 6% of CPU time. Now if I start BoincView and it proceeds to communicate with the service over GUI RPC, the boinc service's CPU share shoots up and oscillates between 7% and 10%. Is it really necessary to use so many cycles on a control application - cycles which could be used more effectively for crunching? Did the developers use polling in the implementation of the RPC interface?
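(For comparison, a push-style interface avoids the cost the question describes: the client sleeps until the service notifies it, instead of issuing an RPC every few seconds whether or not anything changed. A minimal sketch - this is an illustration of the pattern, not the actual GUI RPC code, and the class name is made up:)

```python
import threading

class StatusFeed:
    """Service-side status that clients block on instead of polling."""
    def __init__(self):
        self._cond = threading.Condition()
        self._status = None

    def push(self, status):
        # Service side: publish an update and wake any waiting clients.
        with self._cond:
            self._status = status
            self._cond.notify_all()

    def wait_for_update(self, timeout=None):
        # Client side: sleep (using no CPU) until an update arrives.
        with self._cond:
            self._cond.wait_for(lambda: self._status is not None,
                                timeout=timeout)
            return self._status
```

A client built this way wakes only when the service actually has news, so its idle CPU cost is essentially zero regardless of the update rate.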

Further development: In the 4.24 thread I read about changing the log-on status of the boinc service to system-wide with desktop access (in order to enable graphics). I had to try it :) and an interesting side effect of this change was a dramatic decrease in the service's resource usage. Now the boinc process uses between 0% and 1% of CPU time according to Task Manager.

For the curious, in both cases BoincView used between 0% and 1% of CPU time - 1% every 5 seconds, which is the set interval for updating the views.




---
Everything is just A Question of Physics (http://heim.ifi.uio.no/~stanisls/fysisk/)
ID: 85038 · Report as offensive
Profile Prognatus

Send message
Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 85128 - Posted: 11 Mar 2005, 20:50:39 UTC
Last modified: 11 Mar 2005, 21:12:44 UTC

Paul:
> Why? I take a shower at least once a year whether or not I need one ...

I take a shower at least once a day whether or not I want one ... :)

Stanislav:
> This seemed like an appropriate thread to post this observation, especially since 4.25 is so big that it takes 10 minutes to load everything before the reply box shows up

You must get yourself broadband, Stanislav... :)
But I sympathize with you on this, it's no fun with slow connect speeds.

> [off-topic]To the board devs: please make the reply box always show on top, independent of the current sorting order.[/off-topic]

Yes, absolutely! I support this.

> I read about changing the log-in status of boinc service to system-wide with desktop access (in order to enable graphics). I had to try it :) and an interesting side effect of this change was a dramatic decrease in the service's resource share. Now boinc process uses between 0% and 1% of CPU time according to task manager.

What does this mean - for those of us who haven't tried the 4.25 version yet and can't see it in front of us? Feel free to create a new topic/thread named "Optimal install and use of 4.25" or something like that... :)

ID: 85128 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13722
Credit: 208,696,464
RAC: 304
Australia
Message 85156 - Posted: 11 Mar 2005, 22:49:49 UTC - in response to Message 85128.  


> You must get yourself broadband, Stanislav... :)

Unfortunately, only very small sections of the planet have readily available internet access. Of those parts, only a very small section has broadband available.
Grant
Darwin NT
ID: 85156 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 85171 - Posted: 11 Mar 2005, 23:50:12 UTC - in response to Message 85156.  

>
> > You must get yourself broadband, Stanislav... :)
>
> Unfortunately only very small sections of the planet have readyly available
> internet access. Of those parts, only a very small section have broadband
> available.

Actually, if only very small sections of the planet had broadband, my E-Commerce customers would see much less fraud.

The credit cards belong to people in the U.S. but the orders trace to some pretty obscure places.
ID: 85171 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13722
Credit: 208,696,464
RAC: 304
Australia
Message 85322 - Posted: 12 Mar 2005, 10:05:13 UTC - in response to Message 85171.  


> Actually, if only very small sections of the planet had broadband, my
> E-Commerce customers would see much less fraud.
>
> The credit cards belong to people in the U.S. but the orders trace to some
> pretty obscure places.

More like if there were more broadband, you'd see even more fraud.
Grant
Darwin NT
ID: 85322 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 85408 - Posted: 12 Mar 2005, 16:40:59 UTC - in response to Message 85322.  

>
> More like if there were more broadband, you'd see even more fraud.
>

The fraud we do see is actually entered by actual thieves sitting at keyboards, so I'm not sure that they need a lot of bandwidth. Dialup would do.

.... but my favorite example: two orders, 8 minutes apart, from opposite sides of the U.S., same phone number (same area code), and same credit card number.

The orders both came from the same Class-C block, owned by a satellite internet provider who specializes in high speed satellite access to African nations. Their home office is in Europe, and they have an office in Nigeria.

Certainly there are countries that don't have cheap DSL or Cable, but I doubt that there is a single spot on the planet where you can't get __something__.
ID: 85408 · Report as offensive



 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.