Observation of CreditNew Impact



Message boards : Number crunching : Observation of CreditNew Impact

Previous · 1 . . . 8 · 9 · 10 · 11 · 12 · 13 · 14 . . . 16 · Next
Author Message
Profile James Sotherden
Joined: 16 May 99
Posts: 8517
Credit: 31,063,788
RAC: 53,655
United States
Message 1381703 - Posted: 16 Jun 2013, 4:20:56 UTC

All you guys are right. I was looking for the word "unified" and that's why I thought that it was a manual install. That's what I get for thinking.
____________

Old James

Profile Mike
Volunteer tester
Joined: 17 Feb 01
Posts: 23324
Credit: 31,684,997
RAC: 23,693
Germany
Message 1381739 - Posted: 16 Jun 2013, 7:16:18 UTC - in response to Message 1381690.

You miss the point.
In 2004 cpus used pretty much the same power idle or full throttle.
Boinc was touted as harvesting a waste product, nop cycles.

Today data processing is so much more efficient.
People who let their computer run for 3 days to do an AP WU, when a GPU can do the same WU in 1/2 an hour, aren't doing science, but rather destroying the environment.

Boinc is an abomination encouraging inefficient processing.

A dedicated FPGA cluster with optimized programming could do all of seti@home's crunching at a small fraction of what the Boinc/seti@home energy footprint is.

Boinc isn't distributed computing; it's distributed cost, with no one minding the pennies.


Neither Boinc nor seti was designed to run 24/7.
It is designed to use unused cycles while you are on your computer anyways.
What we are doing is nothing but our own decision.
Nothing we can blame boinc for IMHO.

____________

Profile Wiggo
Joined: 24 Jan 00
Posts: 6452
Credit: 90,145,637
RAC: 73,392
Australia
Message 1381746 - Posted: 16 Jun 2013, 7:27:45 UTC - in response to Message 1381739.

Neither Boinc nor seti was designed to run 24/7.
It is designed to use unused cycles while you are on your computer anyways.
What we are doing is nothing but our own decision.
Nothing we can blame boinc for IMHO.

+1

Just to annoy someone...

Cheers.

Profile Ageless
Joined: 9 Jun 99
Posts: 12258
Credit: 2,544,727
RAC: 218
Netherlands
Message 1381747 - Posted: 16 Jun 2013, 7:29:39 UTC - in response to Message 1381690.

A dedicated FPGA cluster with optimized programming could do all of seti@home's crunching at a small fraction of what the Boinc/seti@home energy footprint is.

Einstein@Home uses their own built Atlas cluster.
As per BM: Atlas currently has 6720 cores in 1680 nodes. Add 66 GPU nodes with 4 CPU cores and 4 NVIDIA Tesla cards (C1060 and C2050) each.

And they still need our help.

____________
Jord

Fighting for the correct use of the apostrophe, together with Weird Al Yankovic

297902
Volunteer tester
Joined: 31 Dec 99
Posts: 1009
Credit: 5,513,963
RAC: 187
Uruguay
Message 1381764 - Posted: 16 Jun 2013, 8:10:34 UTC - in response to Message 1381739.

Neither Boinc nor seti was designed to run 24/7.
It is designed to use unused cycles while you are on your computer anyways.
What we are doing is nothing but our own decision.
Nothing we can blame boinc for IMHO.

Seti only runs through Boinc.
The Boinc paradigm is what should be questioned.
In 2004 cpus were wasteful, not much difference between loaded cpus and cpus doing nops. An argument could be made for harvesting free computing power.

10 years later we have GPUs, 8 core CPUs, 100 watt sleep mode, and 1000 watt gangbuster mode.

Boinc isn't harvesting spare cpu cycles, it's pushing distributed computing at horrific inefficiencies.

The Billion Dollar Computation argument isn't far off.
Boinc really isn't about science, it's social.

It used to be a running joke that the seti@home participants couldn't know if they were really doing calculations for the CIA.
I'm sure there are Boinc projects sending work out to witless crunchers, attracted by stellar credit returns, that amounts to nothing more than Bitcoin mining.

Boinc is at an impasse.
If Boinc can't even address the concerns of a dedicated base, and turns a blind eye to flagrant abuse of its design principles, then it may be irrelevant.

Seti@home worked fine before Boinc.
If anything setizens have been supporting Boinc at no benefit to themselves.


____________
My Halloween costume was so good they sentenced me to 25 Years to Life.

Profile Wiggo
Joined: 24 Jan 00
Posts: 6452
Credit: 90,145,637
RAC: 73,392
Australia
Message 1381770 - Posted: 16 Jun 2013, 8:36:46 UTC - in response to Message 1381764.

You don't have to run anything, just turn your computer off and save money. ;-)

Cheers.

297902
Volunteer tester
Joined: 31 Dec 99
Posts: 1009
Credit: 5,513,963
RAC: 187
Uruguay
Message 1381772 - Posted: 16 Jun 2013, 8:41:14 UTC - in response to Message 1381747.

A dedicated FPGA cluster with optimized programming could do all of seti@home's crunching at a small fraction of what the Boinc/seti@home energy footprint is.

Einstein@Home uses their own built Atlas cluster.
As per BM: Atlas currently has 6720 cores in 1680 nodes. Add 66 GPU nodes with 4 CPU cores and 4 NVIDIA Tesla cards (C1060 and C2050) each.

And they still need our help.


The Atlas cluster has been there for years.
It's a general purpose cpu cluster, old hardware, just now adding GPUs.
I believe the FPGA cluster designed for the Allen Array would put the seti@home base to shame for dedicated processing.
____________
My Halloween costume was so good they sentenced me to 25 Years to Life.

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5685
Credit: 56,143,795
RAC: 49,769
Australia
Message 1381775 - Posted: 16 Jun 2013, 8:46:38 UTC - in response to Message 1381764.
Last modified: 16 Jun 2013, 9:14:05 UTC

10 years later we have GPUs, 8 core CPUs, 100 watt sleep mode, and 1000 watt gangbuster mode.

Making up numbers doesn't help your argument; I'd suggest you try using some actual power-use values.


Boinc isn't harvesting spare cpu cycles, it's pushing distributed computing at horrific inefficiencies.

Can't argue with you there as you don't appear to understand what distributed computing is about.
Once again, the facts appear irrelevant to you.


Boinc really isn't about science, it's social.

You're confusing BOINC, with projects that use it.
BOINC isn't about science- it's about distributed computing.
For most people, it's about the project. For others the Credits. For some the social aspects would be important.


It used to be a running joke that the seti@home participants couldn't know if they were really doing calculations for the CIA.

Only amongst the clueless, paranoid & clinically disturbed.


I'm sure there are Boinc projects sending work out to witless crunchers, attracted by stellar credit returns, that amounts to nothing more than Bitcoin mining.

There probably are, but that's more a reflection of the people who choose those projects over others than any reflection of the BOINC framework.


Seti@home worked fine before Boinc.

And there is the most ignorant & ludicrous statement you have made so far. The problems with the original pre-BOINC project have been gone over many times in the past.

'Tis truly the season for nutters.
____________
Grant
Darwin NT.

Profile Chris S
Volunteer tester
Joined: 19 Nov 00
Posts: 31007
Credit: 11,201,152
RAC: 19,710
United Kingdom
Message 1381792 - Posted: 16 Jun 2013, 10:28:52 UTC

Neither Boinc nor seti was designed to run 24/7.
It is designed to use unused cycles while you are on your computer anyways.
What we are doing is nothing but our own decision.
Nothing we can blame boinc for IMHO.

You are quite correct, Mike, in that Seti and Boinc were designed and ORIGINALLY INTENDED to run in the background utilising unused CPU cycles; that was the planned scenario. But fast forward a decade, and the simple fact is that the project has grown like Topsy: probably 90% of crunchers ARE running it 24/7, and using GPUs, not just CPUs. Personally I don't think that the Boinc devs are really keeping pace with what most of us are doing and seem to want these days. I'm not blaming Dr. A, although some do.

But in the final analysis the Admins run this scientific project in the way that they want, and we are invited to join in if we wish. Having said that there is no real THEM and US, because we all have the opportunity via these threads to suggest improvements or enhancements that we would like to see. We can also help out by joining Seti Beta. I just think that the attitude here of some people needs adjusting, the view that "I give my time & money & CPU cycles to this project, therefore I have a right to say what goes on" is just not valid. To detractors I say, take your ball and go home, and leave the rest of us to play happily without you.

I'll support Eric and the gang unreservedly and I'm pleased to be part of it all.


____________
Damsel Rescuer, Kitty Patron, Uli Devotee, Julie Supporter
ES99 Admirer, Raccoon Friend, Anniet fan, 1% badge, HSA


Eric Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 3 Apr 99
Posts: 1085
Credit: 8,267,319
RAC: 7,563
United States
Message 1381896 - Posted: 16 Jun 2013, 18:22:51 UTC

I suppose I should make a couple points.

First, "CreditNew" has been implemented at SETI@home for a year now, so this reduction in credit is not due to implementing "CreditNew" since it was implemented ages ago.

Second, how credits are normalized hasn't changed between v6 and v7 under CreditNew. Since our results are mostly stock Windows and CUDA under Windows, those results set the normalization.

Based upon the archives, the S@H 6 windows_intel app was generating an average of 0.00556 credits per elapsed second. The current S@H 7 windows_intel app is generating 0.00514 credits per elapsed second, or about 7% less. Astropulse 6 is currently generating 0.00491 credits per elapsed second or about 12% less than S@H v6. I'm hoping the Astropulse issue is resolving itself. (It appears to be slowly coming back to normal.)
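As a sanity check, the percentage drops quoted above follow directly from the credit-rate figures; a quick sketch (the credits-per-elapsed-second rates are taken from the post, the rest is plain arithmetic):

```python
# Sanity check on the credit-rate comparison in the post above.
# The credits-per-elapsed-second figures are quoted from the post;
# the rest is plain arithmetic.

V6_MB = 0.00556   # S@H v6 windows_intel, credits per elapsed second
V7_MB = 0.00514   # S@H v7 windows_intel
AP6   = 0.00491   # Astropulse v6

def pct_change(new, old):
    """Percentage change from the old rate to the new rate."""
    return (new - old) / old * 100.0

print(f"MB v7 vs MB v6: {pct_change(V7_MB, V6_MB):+.1f}%")  # about -7.6%
print(f"AP v6 vs MB v6: {pct_change(AP6, V6_MB):+.1f}%")    # about -11.7%
```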

There aren't a whole lot of knobs I can turn to get that 7% back. I've bumped the estimated GPU efficiency by 20% in hopes that that would help, but thus far I haven't seen a change. I'm going to try increasing the workunit work estimates slowly, but I think that change will get normalized out.

Yes, there are projects that offer more credit. A number of projects have chosen to detach their credits from any measure of actual processing, usually by pretending they are getting 100% efficiency out of GPUs. I fought that battle for years, and lost. Those projects have entirely devalued the BOINC credit system.

____________

Alinator
Volunteer tester
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 1381918 - Posted: 16 Jun 2013, 19:06:55 UTC - in response to Message 1381896.

Awwwwww...... Eric,

You should know better than to try and cloud these credit 'discussions' with logic and rationality.

That takes all the sport and comic relief out of them when they erupt! :-D

msattler
Volunteer tester
Joined: 9 Jul 00
Posts: 38154
Credit: 556,411,263
RAC: 596,702
United States
Message 1381920 - Posted: 16 Jun 2013, 19:09:59 UTC

Thanks for the insights, Eric.

My RAC appears to be trying some sort of soft recovery, but it would have to do an awful lot of climbing to get even close to where it was before.

No matter....where ever it shakes out is OK for the kitties.
____________
*********************************************
Embrace your inner kitty...ya know ya wanna!

I have met a few friends in my life.
Most were cats.

Profile tullio
Joined: 9 Apr 04
Posts: 3567
Credit: 361,515
RAC: 205
Italy
Message 1381922 - Posted: 16 Jun 2013, 19:15:32 UTC

Much ado about nothing. This is my opinion.
Tullio
____________

ExchangeMan
Volunteer tester
Joined: 9 Jan 00
Posts: 108
Credit: 124,011,424
RAC: 94,459
United States
Message 1381927 - Posted: 16 Jun 2013, 19:27:30 UTC

According to the statistics tab in the Boinc Manager, the RAC for my big cruncher bottomed out 7 days ago and is in a slow but steady climb. Where it will top out, I don't know. I would be very skeptical that it will come within 7% of the peak RAC this host has ever achieved, but you never know. I would be very pleased if that happened, however.

I believe that one poster in some thread mentioned the number of work units achieving over 100 credits. Back with MB V6, getting over 100 credits was very common. Now it's perhaps 1 work unit in 20; with V6 it was about 50%.

The credits are all relative since we're all crunching work units from the same pool, but it does make for interesting crunching and a way to compare hosts and configurations over time.
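The slow-climb behaviour described above is what exponential smoothing predicts: BOINC decays RAC with roughly a one-week half-life, so after a dip it climbs back toward the new steady level over several weeks. A simplified sketch, where the half-life is the documented BOINC behaviour but the credit figures are entirely made up:

```python
import math

# Simplified sketch of BOINC's Recent Average Credit smoothing.
# RAC decays with about a one-week half-life, so after a dip it
# converges toward the current daily credit level over a few weeks.
# The RAC and daily-credit numbers below are hypothetical.

HALF_LIFE_DAYS = 7.0

def update_rac(rac, credit_today, days=1.0):
    """One smoothing step: decay the old RAC, blend in today's credit."""
    decay = math.exp(-math.log(2) * days / HALF_LIFE_DAYS)
    return rac * decay + credit_today * (1 - decay)

rac = 80_000.0    # hypothetical RAC at the bottom of the dip
daily = 93_000.0  # hypothetical new steady daily credit
for day in range(28):
    rac = update_rac(rac, daily)
print(f"RAC after 4 weeks: {rac:,.0f}")  # most of the way back to 93,000
```

After four half-lives only 1/16 of the original gap remains, which is why the recovery looks slow but steady rather than sudden.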

____________

bill
Joined: 16 Jun 99
Posts: 859
Credit: 22,296,536
RAC: 19,625
United States
Message 1381961 - Posted: 16 Jun 2013, 21:04:45 UTC - in response to Message 1381896.

Thank You! Thank You! Thank You!
It's nice to see that actions are being
taken by somebody to put some balm on those
terribly wounded RACs.

sleepy
Joined: 21 May 99
Posts: 75
Credit: 21,384,518
RAC: 19,897
Italy
Message 1381974 - Posted: 16 Jun 2013, 21:49:44 UTC - in response to Message 1381896.

Dear Eric,
thank you very much for devoting some precious time to this (minor) issue.

Just a question: when you talk about S@H V6 and V7, are you talking about stock applications?
Because here almost everybody was on optimised applications on V6, which had about double throughput of stock applications.

If now S@H V6 (stock) and V7 (stock, but abundantly optimised) get about the same credit/s, it means that for the same amount of work time we eventually get about half the credits. Which is exactly what we are experiencing.

Of course it is good for the project that now everybody is running on what used to be optimised+some more scientific goodies, but nevertheless this would confirm that we are now getting about half the credit we did before.
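The halving argument above can be sketched numerically. The stock rates are the ones Eric quotes; the 2x speedup for v6 optimised applications is the assumption this post is making:

```python
# Sketch of the halving argument above, with illustrative numbers.
# The stock credit rates are the ones Eric quotes; the 2x speedup of
# the v6 optimised app is the assumption being made in this post.

v6_stock_rate = 0.00556   # v6 stock app, credits per elapsed second
opt_speedup = 2.0         # assumed: v6 optimised app did ~2x the work/sec

# A host on the v6 optimised app finished ~2x the WUs per hour at the
# same credit per WU, so its effective credit rate was about:
v6_opt_rate = v6_stock_rate * opt_speedup

# In v7 the optimisations are folded into the stock app, so everyone is
# normalised back to roughly the stock rate:
v7_rate = 0.00514

print(f"v6 optimised host: {v6_opt_rate:.5f} credits/sec")
print(f"v7 host:           {v7_rate:.5f} credits/sec")
print(f"ratio: {v7_rate / v6_opt_rate:.2f}")  # about 0.46, i.e. roughly half
```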

I hope I am not too wrong and not too unclear. I myself had to read this several times and change some wordings because it was obscure even to me... Open to explaining better if necessary (and of course if not totally wrong!).

And in any case thank you very much for the insight and for explaining to us how things are developing.
And for your dedication to the project, proven in countless occasions!

All the very best,
Sleepy
____________

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5685
Credit: 56,143,795
RAC: 49,769
Australia
Message 1382061 - Posted: 17 Jun 2013, 6:43:28 UTC - in response to Message 1381920.

Thanks for the insights, Eric.

My RAC appears to be trying some sort of soft recovery, but it would have to do an awful lot of climbing to get even close to where it was before.

No sign of recovery here, although it does appear to be falling at a slower rate than it has been.

____________
Grant
Darwin NT.

Profile Jim_S
Joined: 23 Feb 00
Posts: 4472
Credit: 18,281,651
RAC: 6,066
United States
Message 1382085 - Posted: 17 Jun 2013, 8:33:04 UTC

I'm Going Down With The Ship! And She's Still taking on water as We All slip deeper into the Abyss. ;-b....
____________

I Desire Peace and Justice, Jim Scott

Profile ML1
Volunteer tester
Joined: 25 Nov 01
Posts: 8263
Credit: 4,070,696
RAC: 426
United Kingdom
Message 1382099 - Posted: 17 Jun 2013, 10:50:12 UTC - in response to Message 1382085.
Last modified: 17 Jun 2013, 10:59:33 UTC

I'm Going Down...! ...We All slip deeper into the Abyss. ;-b....

Fear not. It is all naught but a splash of statistics.

The ripples will all settle as the "plop!" from the pebble of the new optimised app rolls by. The numbers will settle once more, just as the disturbed mud at the bottom of the pond will once more settle.

We should then find that the old machines will be pushed a little deeper into the mud as the newer machines bob and float upon the new level.

Such is how "CreditNew" has been designed to work. We are all (vaguely) measured against the median of all machines.


Happy ever-faster crunchin'!
Martin
____________
See new freedom: Mageia4
Linux Voice See & try out your OS Freedom!
The Future is what We make IT (GPLv3)

Ingleside
Volunteer developer
Joined: 4 Feb 03
Posts: 1546
Credit: 3,576,058
RAC: 0
Norway
Message 1382109 - Posted: 17 Jun 2013, 11:26:58 UTC - in response to Message 1381974.

Just a question: when you talk about S@H V6 and V7, are you talking about stock applications?

Well, he did mention "Since our results are mostly stock Windows and CUDA under Windows, those results set the normalization." before mentioning the results for the "windows_intel" application, so you can take for granted that it's the stock applications that are setting the credits.

Because here almost everybody was on optimised applications on V6, which had about double throughput of stock applications.

If now S@H V6 (stock) and V7 (stock, but abundantly optimised) get about the same credit/s, it means that for the same amount of work time we eventually get about half the credits. Which is exactly what we are experiencing.

Of course it is good for the project that now everybody is running on what used to be optimised+some more scientific goodies, but nevertheless this would confirm that we are now getting about half the credit we did before.

Yes, as expected the RAC impact for v7 is similar to when FLOPS-counting was introduced: roughly 10% of the decrease is real, and roughly 90% is due to optimized applications no longer being 2x faster than stock applications.


____________
"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."


Copyright © 2014 University of California