Really not good!

Ingleside
Volunteer developer

Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 353315 - Posted: 30 Jun 2006, 21:07:24 UTC - in response to Message 353247.  

Also, using the enhanced results returned up to this point, I'm sure we could figure out a way to adjust credit ratios per result according to machine type. It all boils down to how much added load for the servers this would create. I wouldn't hold my breath, really.


Maybe I misunderstand you here, but are you suggesting giving different credit based on what CPU type a user has? If so, this won't work, for various reasons.

Also, you're overlooking another problem: SETI@home performance isn't only dependent on CPU type and speed, but can also be heavily influenced by cache size and memory speed, and even different mainboards can influence crunching. For multi-core/HT computers, there can be a large difference between, for example, running 2 SETI instances and running 1 SETI and 1 Einstein, and so on. Even on a single-core, single-CPU machine, you'll get some variation depending on whatever other processes are running alongside.

If you take a look at BOINC Alpha, back when it was using SETI@home as the test application, you can find multiple examples where re-running the exact same SETI WU on the exact same computer has given over 30% variation in the reported CPU time...
KWSN - Chicken of Angnor
Volunteer developer
Volunteer tester
Joined: 9 Jul 99
Posts: 1199
Credit: 6,615,780
RAC: 0
Austria
Message 353321 - Posted: 30 Jun 2006, 21:22:27 UTC - in response to Message 353314.  
Last modified: 30 Jun 2006, 21:26:32 UTC

Simon

I would Love to See Your Data al.setiboinc (at) gmail.com

Pappa

I'll see what I can put together for you :o)

And Ingleside - no, I'm not suggesting that's a good idea... just outlining that it could be done from the times taken for enhanced results submitted so far, taking cache size and all into account. A more sophisticated CPU/capability detection routine would have to be coded, but I think that's in the works already.

I'm aware of the variation due to memory bandwidth, cache size etc.

Bottom line is: you can never please everybody. However fair you try to be, someone will feel they have an unfair disadvantage (or advantage).

Regards,
Simon.
Donate to SETI@Home via PayPal!

Optimized SETI@Home apps + Information
[B^S] Dora
Joined: 18 Feb 01
Posts: 38
Credit: 20,149
RAC: 0
United States
Message 353471 - Posted: 1 Jul 2006, 2:48:50 UTC

As you can see, there is no advantage to having a slower machine....

Mine is the second result, and although it took forever, there is only a .01 difference in credit.

Just FYI....

Keep smilin' and keep crunchin'

Dora


Result ID   Computer   Sent (UTC)             Reported / deadline (UTC)   State         Outcome   Status   CPU time (s)   Claimed   Granted
346679557   2021094    25 Jun 2006 18:27:32   26 Jun 2006 2:57:05         Over          Success   Done     19,315.02      64.52     64.52
346679558   2126086    25 Jun 2006 18:27:46   1 Jul 2006 2:18:52          Over          Success   Done     253,401.00     64.53     64.52
346679559   2242236    25 Jun 2006 18:27:31   21 Jul 2006 11:22:38        In Progress   Unknown   New      ---            ---       ---
346679560   2083693    25 Jun 2006 18:27:27   26 Jun 2006 3:50:38         Over          Success   Done     25,078.84      64.52     64.52

Clyde C. Phillips, III
Joined: 2 Aug 00
Posts: 1851
Credit: 5,955,047
RAC: 0
United States
Message 353884 - Posted: 1 Jul 2006, 19:19:01 UTC - in response to Message 353217.  

Clyde, what you said is not 100% correct.

Crunch3r's clients, as well as my optimized clients, also take longer to process the VLAR WUs (usually you get 58.69 or 58.70 credits for them; Crunch3r's overclaims by a very small bit). However, this just doesn't show up as such a big difference in terms of raw time taken - you need to figure out the ratio between VLAR WUs and "normal"-credit WUs. You will see it very closely matches between optimized and default clients.

This has been the case with ALL WUs we have crunched, whether enhanced or standard. With standard, WUs just took even less time than they do with optimized clients now, so the difference was hardly noticeable in raw seconds taken to process a result.

Hope that explains things a bit better. I have ample benchmark data to support what I said, if you want it.

Regards,
Simon.


Ok, Simon. I looked at two of the Pentium D computers using Crunch3r's cruncher listed on the #21-40 page (the elite now consist of only the Conroe, some Macs, some Opterons, and some Pentiums with 16 and 32 threads). I saw in both cases that the 58-credit VLARs took them about 9,000 seconds and the 64-credit 0.4xx-degree ones took them about 6,700 seconds, for a 4-to-3 advantage in favor of the 0.4xx-angle-range units. In my case the ratio is about 8 to 4.5 (hours). So the ratio is fairly close to what you said, but the disadvantage of the default cruncher is appalling (it's possible that overclocking and better memory may be helping the faster machines, too). Whatever the case, as soon as I can find a cruncher for Windows about as fast as Crunch3r's, with easy instructions for loading and operating, I'll use it. Thanks for all the good work you're doing, even though I fear the worst if I ever try to make my own. I wonder whether, when we receive units from the ALFA receiver, all these crunchers will be obsoleted. Matt Lebofsky said two months at best, but that could be two years, based on all the problems encountered in the past.
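To put those ratios on a common footing, it helps to compare credits per CPU hour rather than raw seconds. A minimal sketch in C, using only the times and credit values quoted above (everything else is illustrative):

```c
#include <stdio.h>

/* Compare credit-per-hour rates for VLAR vs. normal-angle-range WUs,
 * using the figures quoted in the posts above. */
int main(void) {
    /* Optimized client on a Pentium D (numbers from the thread) */
    double vlar_credit = 58.69, vlar_secs = 9000.0;
    double norm_credit = 64.50, norm_secs = 6700.0;

    double vlar_rate = vlar_credit / (vlar_secs / 3600.0); /* credits/hour */
    double norm_rate = norm_credit / (norm_secs / 3600.0);

    printf("VLAR:   %.1f credits/hour\n", vlar_rate);   /* ~23.5 */
    printf("Normal: %.1f credits/hour\n", norm_rate);   /* ~34.7 */
    printf("Time ratio (VLAR/normal): %.2f\n",
           vlar_secs / norm_secs);                      /* ~1.34, the 4-to-3 above */
    return 0;
}
```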

1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 354144 - Posted: 2 Jul 2006, 4:00:50 UTC - in response to Message 352252.  

Again, time has nothing to do with it. It is flops. Some flops take longer than others. Why does no one seem to understand this? This gets brought up every few days.

If you sat down and did 200 additions in your head, then 200 multiplications, then 200 divisions, which would take longer? I am betting the divisions. The same kind of thing happens with what SETI calculates: some operations take a lot longer than others, but each still counts as 1 flop.


So people with slow machines that get lucky and don't get these long ones catch up to the fast machines that get a slew of them... Not a fair system... Every time I almost get into the top 1000 machines, I get stuck with these long ones and fall backwards... Guess I better hope for an alien... Or build a faster machine. Hey, I like that idea.

If by "fair" you mean "equitable" then over time, everyone gets as many "fast" units and "slow" units, it is fair.

If faster computers do more work units, then faster computers will get more of the slow work units than slow computers.
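The underlying point - that equal operation counts can cost unequal wall-clock time - is easy to demonstrate. A rough microbenchmark sketch in C (the loop count and the volatile trick are illustrative choices, and the exact ratios vary by CPU):

```c
#include <stdio.h>
#include <time.h>

/* Rough demonstration that identical counts of floating-point
 * operations can take very different amounts of CPU time:
 * divisions are typically several times slower than additions. */
#define N 100000000L

static double bench(char op) {
    volatile double x = 1.000001; /* volatile keeps the loop from being optimized away */
    clock_t t0 = clock();
    for (long i = 0; i < N; i++) {
        switch (op) {
        case '+': x = x + 1.000001; break;
        case '*': x = x * 1.000001; break;
        case '/': x = x / 1.000001; break;
        }
    }
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    printf("%ld additions:       %.2f s\n", N, bench('+'));
    printf("%ld multiplications: %.2f s\n", N, bench('*'));
    printf("%ld divisions:       %.2f s\n", N, bench('/'));
    return 0;
}
```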
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 354146 - Posted: 2 Jul 2006, 4:02:49 UTC - in response to Message 352206.  


If you sat down and did 200 additions in your head, then 200 multiplications, then 200 divisions, which would take longer? I am betting the divisions. The same kind of thing happens with what SETI calculates: some operations take a lot longer than others, but each still counts as 1 flop.


Good analogy.
The Gas Giant
Volunteer tester
Joined: 22 Nov 01
Posts: 1904
Credit: 2,646,654
RAC: 0
Australia
Message 354601 - Posted: 2 Jul 2006, 20:46:17 UTC - in response to Message 354144.  

Again, time has nothing to do with it. It is flops. Some flops take longer than others. Why does no one seem to understand this? This gets brought up every few days.

If you sat down and did 200 additions in your head, then 200 multiplications, then 200 divisions, which would take longer? I am betting the divisions. The same kind of thing happens with what SETI calculates: some operations take a lot longer than others, but each still counts as 1 flop.


So people with slow machines that get lucky and don't get these long ones catch up to the fast machines that get a slew of them... Not a fair system... Every time I almost get into the top 1000 machines, I get stuck with these long ones and fall backwards... Guess I better hope for an alien... Or build a faster machine. Hey, I like that idea.

If by "fair" you mean "equitable" then over time, everyone gets as many "fast" units and "slow" units, it is fair.

If faster computers do more work units, then faster computers will get more of the slow work units than slow computers.

I am talking about fair credit for an hour of CPU time across all WUs, across all projects, and proportionate across platforms of varying specifications. When I see a project giving up to 3 times the "standard" credit, it immediately lends itself to abuse. Maybe FLOP counting is not the way after all.
Keith Jones
Joined: 25 Oct 00
Posts: 17
Credit: 5,266,279
RAC: 0
United Kingdom
Message 355163 - Posted: 3 Jul 2006, 12:30:37 UTC

Hiya,

I guess this has all been said by people far greater in intelligence than I but here are the ideas I haven't seen trashed as imbalanced......yet!

IMHO...

You're quite right that FLOPS aren't the best way of indicating effort put in, and I agree totally with the idea of CPU time being the simplest solution, but in a slightly different way.

Whenever you set a standard measure, some will not be able to achieve it and some will overachieve it. Fairness is trying to ensure that the standard measure lies equally between those groups of people.

...that was a bit philosophical, but bear with me!!!...

FLOPs are dependent on the architecture of the CPU, the efficiency of the brand, and the use of optimisations such as extended instructions, i.e. P3 vs P4, AMD vs Intel, MMX vs SSE2. It would be extremely hard to map every kind of processor fairly to get an average FLOP.

CPU time, however, is also influenced by CPU efficiency, extended instructions etc. It's not as simple as considering it a measure of how many cycles your CPU has donated to the task. Some of those cycles can do a lot more with extended instructions etc. So again, taking CPU time as a lump figure doesn't work fairly. More importantly, it doesn't allow people to see optimisation at work. Hence it makes it hard for the devs to get real-life feedback on the core client's effectiveness, and therefore on the bulk of work being done by their application.

The last(?) option is for the SETI guys to analyze the data, pick an arbitrary value for the average WU, and give the same credit for each WU across the board. It sounds like going back to SETI Classic concepts, and it has its own problems, but they seem less of a 'fairness' issue.

The main issue is to prevent the obvious 'cherry picking'. I know people will cherry pick good WUs for a while but it's bound to show in the 'aborts' showing up for an account. There's already a crude penalty system in place to reduce WUs sent to an individual showing failures and aborts. It just needs to be watched....

The secondary issue is that it blocks the idea of throwing computations to appropriately powerful processors. Then again, that's only really useful for amazingly long single calculations or time critical applications.

This seems to be possible with the approach the team are adopting (ie by asking for un-optimised client results to be reported). This could allow them to produce the required averages....

It's basically a rock and a hard place decision.

I guess I don't really care either way but it would be nice if it could all be 'fair', still allow for the optimisers and not penalise those that don't have the facilities to contribute much. That to me says the last option.. to others it doesn't.... again... such is life ;-)

My tuppence,

Keith

@PoohBear - I hope you don't mind, but I'd suggest a small amendment to what you said about division... division is usually grouped with multiplication, as addition is with subtraction. Multiplying versus dividing should take around the same time, as they're basically the same process, just done in reverse. Try doing a few of them on paper and you'll see how similar the process is. It's just that as humans we don't deal with negative numbers easily, hence division tends to give us a mental block.... Computers deal with negative numbers with a little less bias ;-)

To amend your analogy, long multiplication should take the average computer the same time as long division. Subtraction and addition should take the same time as each other too, but it's much easier to add/subtract than it is to multiply/divide. So yes, Ned was right, good analogy, just swayed a little by being human!
Odysseus
Volunteer tester
Joined: 26 Jul 99
Posts: 1808
Credit: 6,701,347
RAC: 6
Canada
Message 355431 - Posted: 3 Jul 2006, 19:31:24 UTC - in response to Message 355163.  
Last modified: 3 Jul 2006, 19:33:23 UTC

@PoohBear - I hope you don't mind, but I'd suggest a small amendment to what you said about division... division is usually grouped with multiplication, as addition is with subtraction. Multiplying versus dividing should take around the same time, as they're basically the same process, just done in reverse. Try doing a few of them on paper and you'll see how similar the process is. It's just that as humans we don't deal with negative numbers easily, hence division tends to give us a mental block.... Computers deal with negative numbers with a little less bias ;-)

If we have difficulty understanding negative numbers, that will manifest when we’re doing subtraction, rather than division, making it seem harder than addition. If a computer language makes use of an ‘unsigned integer’ data type (thereby gaining a bit of range per word), subtraction of a larger number from a smaller will produce an error, or require a type-conversion step to make the result an ordinary signed integer.

But here we’re talking about floating-point operations, and AFAIK you’re quite right that division and multiplication are symmetrical in this realm. I suspect Pooh was thinking of integer multiplication, which is always safe (barring an overflow) in comparison to division, never producing a floating-point result. Likewise, when people find division more difficult than multiplication, it’s likely because their grasp of fractions is weaker than of integers.
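In C, for instance, the unsigned case doesn't raise an error at all; it silently wraps around, which makes the same point in a different way. A small illustration (a constructed example, not anything from the SETI code):

```c
#include <stdio.h>

int main(void) {
    unsigned int a = 3, b = 5;

    /* Unsigned subtraction of a larger number from a smaller one
     * doesn't error out in C -- it wraps around modulo 2^32
     * (assuming the typical 32-bit unsigned int). */
    unsigned int u = a - b;
    printf("3u - 5u = %u\n", u);             /* 4294967294 */

    /* Converting to a signed type first gives the expected result. */
    int s = (int)a - (int)b;
    printf("(int)3 - (int)5 = %d\n", s);     /* -2 */

    /* Integer division truncates; floating-point division doesn't. */
    printf("3 / 5   = %d\n", 3 / 5);         /* 0 */
    printf("3.0/5.0 = %g\n", 3.0 / 5.0);     /* 0.6 */
    return 0;
}
```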
Diego -=Mav3rik=-
Joined: 1 Jun 99
Posts: 333
Credit: 3,587,148
RAC: 0
Message 355628 - Posted: 3 Jul 2006, 22:51:46 UTC - in response to Message 355163.  

Hiya,

I guess this has all been said by people far greater in intelligence than I but here are the ideas I haven't seen trashed as imbalanced......yet!

I thought I was intelligent, but I must be a complete moron because I still can't understand why people bitch about credits. ;)

I guess I don't really care either way but it would be nice if it could all be 'fair', still allow for the optimisers and not penalise those that don't have the facilities to contribute much.

But it IS all fair.
And for crying out loud, no matter what method is used to calculate credits, a fast computer will always make more credits than a slow one.
I don't think it's so hard to understand, but oh well, I remember now, I must be a complete moron. ;)

Peace.
/Mav

We have lingered long enough on the shores of the cosmic ocean.
We are ready at last to set sail for the stars.

(Carl Sagan)
pooter
Joined: 8 May 05
Posts: 184
Credit: 8,081
RAC: 0
Message 356154 - Posted: 4 Jul 2006, 5:54:58 UTC - in response to Message 355163.  
Last modified: 4 Jul 2006, 5:55:42 UTC

<edited>
nem·e·sis (nĕm'ĭ-sĭs) pronunciation
n., pl. -ses (-sēz').

1. An opponent that cannot be beaten or overcome.

2. One that inflicts retribution or vengeance.
m.mitch
Volunteer tester
Joined: 27 Jun 01
Posts: 338
Credit: 127,769
RAC: 0
Australia
Message 356368 - Posted: 4 Jul 2006, 10:24:15 UTC - in response to Message 356154.  

<edited>


That's probably the most intelligent thought anyone's had on these boards. I wonder if that includes me? ;-D




Click here to join the #1 Aussie Alliance in SETI
Keith Jones
Joined: 25 Oct 00
Posts: 17
Credit: 5,266,279
RAC: 0
United Kingdom
Message 356392 - Posted: 4 Jul 2006, 10:45:03 UTC - in response to Message 355431.  
Last modified: 4 Jul 2006, 10:45:32 UTC


If we have difficulty understanding negative numbers, that will manifest when we’re doing subtraction, rather than division, making it seem harder than addition. If a computer language makes use of an ‘unsigned integer’ data type (thereby gaining a bit of range per word), subtraction of a larger number from a smaller will produce an error, or require a type-conversion step to make the result an ordinary signed integer.

But here we’re talking about floating-point operations, and AFAIK you’re quite right that division and multiplication are symmetrical in this realm. I suspect Pooh was thinking of integer multiplication, which is always safe (barring an overflow) in comparison to division, never producing a floating-point result. Likewise, when people find division more difficult than multiplication, it’s likely because their grasp of fractions is weaker than of integers.


Cool... I was trying to adapt it to a better example; I obviously failed miserably ;-} Note to self: must try harder! PoohBear doesn't explicitly mention subtraction, so maybe we're both reading the wrong intentions into PoohBear's words. Ah well, eh?

I think fractions are an *issue*, but why they're an issue might be interesting to think about ;-) This isn't an attempted hijack, so I'd better leave my comment at this and only reply out of politeness if you respond.

I see what you mean, but step back in time a bit to learning maths. A leap not hard for me ;-)

CPUs typically mimic fundamental processes. At that young age we consider multiplication as adding a number together a multiple of times (times tables). Essentially a CPU does the same, with a few more *smarts* involved, because everyday numbers have no limits and CPUs do. Hence the overflow flags you mention.

Division is also taught along similar lines: we break it down into subtracting and carrying numbers. CPUs, as you say, have sign bits.

I guess what I was trying to say is that how we handle fractions is fundamentally derived from our experience of adding and subtracting. It's the overflows and underflows that we calculate the fractional parts from.

Since we easily see big integers, but not negative numbers, in the real world, that may well be why we have problems with fractions as well.

How's that for saying we're BOTH right ? ;-)

Regards,
Keith
Keith Jones
Joined: 25 Oct 00
Posts: 17
Credit: 5,266,279
RAC: 0
United Kingdom
Message 356419 - Posted: 4 Jul 2006, 11:35:47 UTC - in response to Message 355628.  
Last modified: 4 Jul 2006, 11:36:59 UTC

Hiya,

I guess this has all been said by people far greater in intelligence than I but here are the ideas I haven't seen trashed as imbalanced......yet!

I thought I was intelligent, but I must be a complete moron because I still can't understand why people bitch about credits. ;)


I'm so glad I said "yet" ;-)

I forgot to quote The Gas Giant's stuff, so I guess my message could have looked like a general bitch - but did I actually bitch? I thought I was summarising and saying that all the credit ideas had issues for someone. I guess *I* must be the moron... ;-)

I guess I must also be imagining things, like all the others discussing the 'unfairness' of credit and wanting a response ;-)

I guess I must also be a credit bunny and NOT trying to do as much work, as fast, as possible to get science results for the future. Why would I ever want to have a credit system to measure myself against others to make sure I am doing the best I can? ;-)

Come on... a little faith, please?

I guess I don't really care either way but it would be nice if it could all be 'fair', still allow for the optimisers and not penalise those that don't have the facilities to contribute much.

But it IS all fair.
And for crying out loud, no matter what method is used to calculate credits, a fast computer will always make more credits than a slow one.
I don't think it's so hard to understand, but oh well, I remember now, I must be a complete moron. ;)


Peace.


Not to put too fine a point on it, I'm getting the feeling you don't actually give a jot about credits. If so, why comment at all? Surely it'd be something you'd just ignore?

As for the fast computer bit... well ermm.. I can hardly argue with that concept... but I don't see the relevance to what I was saying. Feel free to elucidate...

I'll re-read what I wrote and see whether I've been misleading.

...and tranquility,

Keith


SwissNic
Joined: 27 Nov 99
Posts: 78
Credit: 633,713
RAC: 0
United Kingdom
Message 356431 - Posted: 4 Jul 2006, 11:59:51 UTC - in response to Message 355163.  
Last modified: 4 Jul 2006, 12:04:15 UTC

FLOPs are dependent on the architecture of the CPU, the efficiency of the brand, and the use of optimisations such as extended instructions, i.e. P3 vs P4, AMD vs Intel, MMX vs SSE2. It would be extremely hard to map every kind of processor fairly to get an average FLOP.


Not sure I agree. A floating-point operation is a mathematical calculation, and has nothing to do with the hardware performing the calculation... If I do "2+2" in my head, on an abacus, or on a calculator, the answer is still 4.

So if a million calculations need to be performed, and say that is worth 10 credits, then whether a slow, outdated, antique excuse for a computer takes 3 years to perform those calcs or a heavily optimised, overclocked, water-cooled monster takes 3 seconds to do the same work, both machines should still receive only 10 credits.
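That is the essence of flop-counted credit: pay for the operation count, not the elapsed time. A minimal sketch, using the 10-credits-per-million-operations rate from the example above (an illustrative figure, not the real SETI@home conversion factor):

```c
#include <stdio.h>

/* Flop-count-based credit: the same operation count earns the same
 * credit regardless of how long the hardware took. The rate here
 * (10 credits per million operations) is just the figure from the
 * post above, not SETI@home's actual conversion factor. */
double credit_for(double flops) {
    const double credits_per_op = 10.0 / 1e6;
    return flops * credits_per_op;
}

int main(void) {
    double flops = 1e6;  /* a million calculations */
    /* Three years or three seconds of wall time -- same claim: */
    printf("antique PC:         %.1f credits\n", credit_for(flops));
    printf("watercooled monster: %.1f credits\n", credit_for(flops));
    return 0;
}
```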

CPU time, however, is also influenced by CPU efficiency, extended instructions etc. It's not as simple as considering it a measure of how many cycles your CPU has donated to the task. Some of those cycles can do a lot more with extended instructions etc. So again, taking CPU time as a lump figure doesn't work fairly. More importantly, it doesn't allow people to see optimisation at work. Hence it makes it hard for the devs to get real-life feedback on the core client's effectiveness, and therefore on the bulk of work being done by their application.


Yeah - but not sure I really see the relevance of this argument...

The last(?) option is for the SETI guys to analyze the data, pick an arbitrary value for the average WU, and give the same credit for each WU across the board. It sounds like going back to SETI Classic concepts, and it has its own problems, but they seem less of a 'fairness' issue.


This does not address cross-project work, and is therefore unworkable... sorry.

The main issue is to prevent the obvious 'cherry picking'. I know people will cherry pick good WUs for a while but it's bound to show in the 'aborts' showing up for an account. There's already a crude penalty system in place to reduce WUs sent to an individual showing failures and aborts. It just needs to be watched....


I like my credit and seeing my name shoot up the tables on BoincStats, but this "cherry-picking" issue really doesn't bother me at all... If some nerd has the time to watch every WU coming into his/her farm and cherry-pick the fastest - fine! I actually have a life, and am not _THAT_ bothered about it...

The secondary issue is that it blocks the idea of throwing computations to appropriately powerful processors. Then again, that's only really useful for amazingly long single calculations or time critical applications.

This seems to be possible with the approach the team are adopting (ie by asking for un-optimised client results to be reported). This could allow them to produce the required averages....

It's basically a rock and a hard place decision.

I guess I don't really care either way but it would be nice if it could all be 'fair', still allow for the optimisers and not penalise those that don't have the facilities to contribute much. That to me says the last option.. to others it doesn't.... again... such is life ;-)

My tuppence,

Keith


Because of the type of maths involved, there is no way to predict precisely the number of FLOPs required to produce a result, and as such a single WU cannot be pre-assigned an amount of credit. Equally, as different architectures handle FLOPs in different ways, the number of operations needed to complete a mathematical calculation might vary, so a real-time count will be inaccurate - but probably the closest to the mark.

At the end of the day, life isn't fair (Abramovich proves this point, I think), and I think the BOINC team are getting it fairly spot-on. If there were any favouritism toward people running slow machines with standard clients (but who happen to be big loud-mouths at complaining - not mentioning any names, mmciastro!!!) over people who donate the time to optimise apps and run fast machines, then I would feel the project was shooting itself in the foot. A fast, optimised machine is the BEST platform to turn around WUs - and that, at the end of the day, is what the project is all about...



Saenger
Volunteer tester
Joined: 3 Apr 99
Posts: 2452
Credit: 33,281
RAC: 0
Germany
Message 356432 - Posted: 4 Jul 2006, 11:59:57 UTC - in response to Message 353143.  

That results page was for the same computer.

In my ideal world, if I crunch for 1 CPU hour, I expect to get the same amount of credit per hour no matter which WU I crunch or which project I crunch.

And how do you calculate a CPU hour? Is the project you're running getting 90%, 80%, 50% or 2% of the CPU time, or does it fluctuate all the time?

I'd take the time as BOINC reports it. By my observation, it runs slow when the puter does other work, and is nearly like the wall clock when nothing else is done.

A puter that runs one project's WUs for different times (with the same setup of BOINC/app/OS) should claim more for the longer crunching time. It's irrelevant what the app does internally; that's not my problem. I donate the computing power, and if it takes this long because of the project's sloppy programming, I should not be "punished". My puter is the same, my effort is the same, my setup is the same, so my credits have to be the same.

IMHO the first method of calculation was, in theory, the right one:
Look at how powerful a puter is and give it a corresponding value (benchmark), then multiply it by its effort (CPU time). Include a calibrating factor to make the numbers nice if you want.
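In code, that proposal is simply benchmark times CPU time times a calibration factor. A sketch under exactly those assumptions (the constant below is invented for illustration and is not BOINC's real factor):

```c
#include <stdio.h>

/* Saenger's proposal in code form: credit scales with how powerful
 * the machine is (benchmark) times how long it worked (CPU time),
 * with an arbitrary calibration factor to make the numbers nice. */
double claimed_credit(double benchmark_gflops, double cpu_seconds) {
    const double calibration = 100.0 / 86400.0; /* ~100 credits per GFLOPS-day, assumed */
    return benchmark_gflops * cpu_seconds * calibration;
}

int main(void) {
    /* Same machine, same setup: a WU that runs twice as long claims
     * twice the credit, regardless of what the app did internally. */
    printf("4 h WU: %.1f credits\n", claimed_credit(2.0, 4 * 3600.0));
    printf("8 h WU: %.1f credits\n", claimed_credit(2.0, 8 * 3600.0));
    return 0;
}
```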
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
Ulrich Metzner
Volunteer tester
Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Germany
Message 356523 - Posted: 4 Jul 2006, 13:12:46 UTC
Last modified: 4 Jul 2006, 13:16:59 UTC

The new crediting system is - without the slightest doubt - extremely unfair for the so-called long WUs (= VLAR WUs).

The situation is also exacerbated by those raisin-picking people, who abort long WUs on their machines and thereby additionally discriminate against people - like me - who are 'stupid' enough to crunch those long ones for the pure sake of science... :/

I'm unhappy about the new crediting system, but I'm really pissed off by those raisin pickers - really! >:(

[edit] In case anyone wants proof: http://setiathome.berkeley.edu/forum_thread.php?id=32093 :(
Aloha, Uli

cdr100560
Joined: 12 May 06
Posts: 681
Credit: 65,502
RAC: 0
United States
Message 356674 - Posted: 4 Jul 2006, 15:04:52 UTC

Is that what I'm seeing? For the past two weeks, my RAC has dropped like a stone and I am not accumulating total credit at the same rate. The PC is relatively fast, and there's only enough work cached for about a day's crunching (530 Prescott, 2 GB DDR2-533, about 10 WUs in cache), and the rig runs almost constantly except for brief periods. The first image shows RAC (not the primary measure of work done, but it is a benchmark) dropping like a stone.

But I am accumulating credit. It's hard to see, but there is a peak in the rate of accumulation at around the same time (two weeks ago).

WUs have been taking about 4-6 hours to finish, which is not unusual. Is this what we're seeing across the board? I understand the basics behind the angle range. I'm still waiting for the new work. (Keeping fingers crossed!)
Keith Jones
Joined: 25 Oct 00
Posts: 17
Credit: 5,266,279
RAC: 0
United Kingdom
Message 357066 - Posted: 4 Jul 2006, 19:50:46 UTC - in response to Message 356431.  
Last modified: 4 Jul 2006, 20:07:00 UTC

FLOPs are dependent on the architecture of the CPU, the efficiency of the brand, and the use of optimisations such as extended instructions, i.e. P3 vs P4, AMD vs Intel, MMX vs SSE2. It would be extremely hard to map every kind of processor fairly to get an average FLOP.


Not sure I agree. A floating-point operation is a mathematical calculation, and has nothing to do with the hardware performing the calculation... If I do "2+2" in my head, on an abacus, or on a calculator, the answer is still 4.


I have the feeling my quoting is gonna get messy :-)

I guess that's a definition thing. I was led to believe the FLOP measurement was the same as defined by the CPU manufacturers. If that's so, there's a wide difference in how many FLOPs your CPU does per machine cycle between model, make and series. It's also complicated by those extended instructions (MMX, SSE2) that can process bunches of them quickly. So maybe someone can clarify for both of us? It would save me getting the wrong picture. Does anyone know if the SETI/BOINC FLOP is a synthetic thing or related to machine-code concepts?

So if a million calculations need to be performed, and say that is worth 10 credits, then whether a slow, outdated, antique excuse for a computer takes 3 years to perform those calcs or a heavily optimised, overclocked, water-cooled monster takes 3 seconds to do the same work, both machines should still receive only 10 credits.


Yep, I totally agree, if the above definition is my problem :-) If it's not, then what if an optimised app sees 10 FLOPs in a row doing the same thing? It will optimise the process and throw them to an extended instruction. What you don't want is for the framework to miss this and credit too high or too low for the FLOPs involved. It becomes a programming nightmare and could easily cause credit-claim issues. As you said, you've done the work, so you should get the right credit for it.

CPU time, however, is also influenced by CPU efficiency, extended instructions etc. It's not as simple as considering it a measure of how many cycles your CPU has donated to the task. Some of those cycles can do a lot more with extended instructions etc. So again, taking CPU time as a lump figure doesn't work fairly. More importantly, it doesn't allow people to see optimisation at work. Hence it makes it hard for the devs to get real-life feedback on the core client's effectiveness, and therefore on the bulk of work being done by their application.


Yeah - but not sure I really see the relevance of this argument...


Did it make more sense above? I hope it seems a bit more relevant now. :-)

The last(?) option is for the SETI guys to analyze the data, pick an arbitrary value for the average WU, and give the same credit for each WU across the board. It sounds like going back to SETI Classic concepts, and it has its own problems, but they seem less of a 'fairness' issue.


This does not address cross-project work, and is therefore unworkable... sorry.


To put it bluntly: why? I'm not trying to be brash or anything. I'm happy to learn and change my views. I do need something to work with, though ;-)

A well-calculated average credit per WU, and the work involved in finding it, simplifies the comparison, doesn't it? We're calculating stuff against a model, so let's build one for how we do it ;-)


The main issue is to prevent the obvious 'cherry picking'. I know people will cherry pick good WUs for a while but it's bound to show in the 'aborts' showing up for an account. There's already a crude penalty system in place to reduce WUs sent to an individual showing failures and aborts. It just needs to be watched....


I like my credit and seeing my name shoot up the tables on BoincStats, but this "cherry-picking" issue really doesn't bother me at all... If some nerd has the time to watch every WU coming into his/her farm and cherry-pick the fastest - fine! I actually have a life, and am not _THAT_ bothered about it...


True, I hope I have a life too ;-). Maybe some nerd could automate the process, and what if he then has a farm or supports a team? There aren't likely to be awards at the end of this, but you never know ;-) You wouldn't want to feed the press bad stories, and I'm sure everyone would appreciate not having another potential accusation thrown their way.

The secondary issue is that it blocks the idea of throwing computations to appropriately powerful processors. Then again, that's only really useful for amazingly long single calculations or time critical applications.

This seems to be possible with the approach the team are adopting (ie by asking for un-optimised client results to be reported). This could allow them to produce the required averages....

It's basically a rock and a hard place decision.

I guess I don't really care either way but it would be nice if it could all be 'fair', still allow for the optimisers and not penalise those that don't have the facilities to contribute much. That to me says the last option.. to others it doesn't.... again... such is life ;-)

My tuppence,

Keith


Because of the type of maths involved, there is no way to predict precisely the number of FLOPs required to produce a result, and as such a single WU cannot be pre-assigned an amount of credit. Equally, as different architectures handle FLOPs in different ways, the number of operations needed to complete a mathematical calculation might vary, so a real-time count will be inaccurate - but probably the closest to the mark.


Precisely what I was saying, sort of :-). I was thinking it might be easier to produce the average over a lot of WUs from the databases, rather than worrying about the end machine as such. You can always break it down by CPU ID etc., and I'm sure optimised clients can be filtered by their replies. It might even give valuable information on the team's own optimisation efforts. Dunno, what do you think?


At the end of the day, life isn't fair (Abramovich proves this point, I think), and I think the BOINC team are getting it fairly spot-on. If there were any favouritism toward people running slow machines with standard clients (but who happen to be big loud-mouths at complaining - not mentioning any names, mmciastro!!!) over people who donate the time to optimise apps and run fast machines, then I would feel the project was shooting itself in the foot. A fast, optimised machine is the BEST platform to turn around WUs - and that, at the end of the day, is what the project is all about...


Yeah, life is unfair. It doesn't mean we shouldn't try to change that, though ;-)
The team ARE good; I would never say otherwise! I was hoping that going back to basics might trigger some more fundamental thinking, rather than chewing through stop-gaps and making things more complex.


Here's where I find out if my manual quoting worked ;-)

[EDIT] Only 7 goes at balancing those quotes... sheesh! [/EDIT]

Regards,

Keith
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 357156 - Posted: 4 Jul 2006, 20:43:44 UTC - in response to Message 356419.  


But it IS all fair.
And for crying out loud, no matter what method is used to calculate credits, a fast computer will always make more credits than a slow one.
I don't think it's so hard to understand, but oh well, I remember now, I must be a complete moron. ;)


Peace.


Not to put too fine a point on it, I'm getting the feeling you don't actually give a jot about credits. If so, why comment at all? Surely it'd be something you'd just ignore?

It seems intuitively obvious that we have at least two or three different meanings for the word "fair".

When I read "fair" I think "equitable" -- that everyone has an equal chance of getting "easy" and "hard" work units, and that everyone who does a work unit gets the same credit.

Clearly, some read "fair" and see "equal" -- that an hour of crunching will return the same number of credits no matter how many "easy FLOPs" and "hard FLOPs" are involved.

Some read "fair" as "no less than I used to get when I ran an optimized app."

... and as a result, we're often talking past each other when we start saying "fair."

I think "equitable" is the best meaning....
