Boycotting a project for TOO MUCH CREDIT???


BarryAZ

Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 830893 - Posted: 15 Nov 2008, 18:25:52 UTC - in response to Message 830847.  

I'm interspersing comments in reply to your message (I've been over at MilkyWay for about two and a half months).


original code is terrible and doesn't even make sense in many segments
wastes a massive amount of time on repeated calculations for no reason

> This is quite true

After the optimized clients were done, speedups were in the region of a factor of 50 and above.

> Also quite true.
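As an aside on how speedups of that magnitude arise: they usually come from eliminating repeated work rather than from clever micro-tuning. A minimal, generic Python sketch (hypothetical code, not the actual MilkyWay source) of hoisting a loop-invariant calculation out of an inner loop:

```python
import math

def slow_sum(values, x):
    # Anti-pattern: recomputes the same expensive term on every iteration.
    total = 0.0
    for v in values:
        total += v * math.exp(-x * x)  # exp(-x*x) never changes inside this loop
    return total

def fast_sum(values, x):
    # Hoist the loop-invariant term: one exp() call instead of len(values) calls.
    weight = math.exp(-x * x)
    return sum(values) * weight

values = [0.5, 1.5, 2.5]
assert abs(slow_sum(values, 1.2) - fast_sum(values, 1.2)) < 1e-12
```

When the repeated term dominates the loop body, the speedup scales with the iteration count, which is how factors of 50 and more become plausible.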

Given all the details, the Maintainers of MW for unknown reasons (to me) failed to correct the problem; comments from involved Users sound like they didn't even try.

> Actually, they were limited in resources and have been working on a client to implement the optimized client improvements -- the process has taken far too long but *apparently* it will be released by the end of this month.

Instead of freely distributing the Optimized Client, it was held back (rightfully fearing cheaters would ruin the project) and available only to a few.

> Well, the folks who did the optimized clients (there were at least two different players with very similar approaches), kept them private or provided them to a very few others.

A cheap dual-core CPU could easily run at a RAC of 50,000 and above!
Only a handful were kept running, as a silent protest against the terrible codebase, pending correction by MW staff.

> Right -- not so silent, the credit numbers spoke very loud.

After months-long discussions, the user base of several larger teams has apparently now voted to boycott the project, as its staff repeatedly failed to correct the known problem.

> Not quite the case, though a number did leave. What happened is that one of the people who did the optimized code (and both of the optimizers were in contact with the MilkyWay project people) posted his code, making it available (much like the SETI optimized code is available) about 3 or 4 weeks ago. Once that happened all sorts of 'unintended' consequences followed.

Apart from that, MW was already attracting 'credit whores' with its default client, due to its apparently well-above-average credit awards.

>Calling folks 'credit whores' is over the top and might put you in conflict with the message posting rules here. That being said, even before the optimized client hit the street, the credit awards for MilkyWay were something close to double those for the *optimized* SETI application on Intel processors, and provided an even higher premium for AMD-based systems (the SETI application code inadvertently is such that AMD systems get short shrift -- hasn't stopped me over the years though).



IMHO, in a perfect world, every project's default client should indeed strive to give equal credit when run on the same host. Some slack is natural, but it shouldn't exceed certain unwritten limits.
Otherwise, we'd see (and already have seen) individual projects with low TFLOPS counts outrunning projects that actually have far more active power but give fair credit. (Just look at the BOINCstats global stats table and you'll easily spot them.)

> In a perfect world, we'd not have projects with weekly 6 hour outages, 6 to 24 hour post outage recovery cycles, frequent upload/download stalls, periodic black hole long running 0 credit work units, or applications which penalize one manufacturer CPU (AMD) over another (Intel). The world isn't perfect -- when the world is perfect, feel free to toss out the first stone.


ID: 830893
BarryAZ

Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 830894 - Posted: 15 Nov 2008, 18:31:24 UTC - in response to Message 830891.  

You know, you may be right there -- though I wonder just how much work is being accomplished these days with the results that the SETI project has generated over the years -- I've heard tales which suggest the results are accumulating and not being processed. Heck, in the old pre-BOINC days of SETI Classic, as they were going into a months-long wind-down mode (I was there crunching), work that had already been sent out, returned *and validated* was being sent out again, as much to keep the population of SETI Classic folks happy while the most serious issues of BOINC were being dealt with.



I'm going to say something here that may upset a few, and it is from the MilkyWay@Home project home page:
"This particular project is being developed to better understand the power of volunteer computer resources."
I believe all this controversy over credits and the inefficiency in the stock application is, and continues to be, intentional. I'm not convinced the project was created to do astronomical science. The papers cited on the home page are all about computer projects, not about galaxy modeling. The project is being run by their computer science department, not their astronomy department. I think the thing being studied is called BOINC. I think one of the things being studied is how credit awarding affects the computing power given to a project.



ID: 830894
Gary Charpentier
Volunteer tester

Joined: 25 Dec 00
Posts: 30685
Credit: 53,134,872
RAC: 32
United States
Message 830920 - Posted: 15 Nov 2008, 20:08:21 UTC - in response to Message 830894.  

I was around in the Classic days too. They ran out of telescope data because way too many people wanted to crunch. Life. Sometimes you get too much of a good thing.

As for data being checked, NTPCKR isn't the real data analysis application, but it will help. I know the data we return is being placed into a database. I am sure that from time to time Dr. K checks the database for exceptionally high results. The question you have is how often. Does it matter if it is once a week or once a decade? Science isn't fast. And even if an exceptionally high spot is found, that isn't detection of ET. Next you have to get exclusive telescope time to look at the spot with other methods to see if the signal is still there. Then, if it is, you need to have someone else at a different telescope take a look and confirm. Then you can announce ET.

[quote]
You know, you may be right there -- though I wonder just how much work is being accomplished these days with the results that the SETI project has generated over the years -- I've heard tales which suggest the results are accumulating and not being processed. Heck, in the old pre-BOINC days of SETI Classic, as they were going into a months-long wind-down mode (I was there crunching), work that had already been sent out, returned *and validated* was being sent out again, as much to keep the population of SETI Classic folks happy while the most serious issues of BOINC were being dealt with.

I'm going to say something here that may upset a few, and it is from the MilkyWay@Home project home page:
"This particular project is being developed to better understand the power of volunteer computer resources."
I believe all this controversy over credits and the inefficiency in the stock application is, and continues to be, intentional. I'm not convinced the project was created to do astronomical science. The papers cited on the home page are all about computer projects, not about galaxy modeling. The project is being run by their computer science department, not their astronomy department. I think the thing being studied is called BOINC. I think one of the things being studied is how credit awarding affects the computing power given to a project.
[/quote]




ID: 830920
BarryAZ

Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 830943 - Posted: 15 Nov 2008, 21:52:48 UTC - in response to Message 830920.  

I understand -- just figured that with some folks going on the offensive about another BOINC project, getting folks pristine first might stem the potential for a mob cry for a lynching.


[quote]
I was around in the Classic days too. They ran out of telescope data because way too many people wanted to crunch. Life. Sometimes you get too much of a good thing.

As for data being checked, NTPCKR isn't the real data analysis application, but it will help. I know the data we return is being placed into a database. I am sure that from time to time Dr. K checks the database for exceptionally high results. The question you have is how often. Does it matter if it is once a week or once a decade? Science isn't fast. And even if an exceptionally high spot is found, that isn't detection of ET. Next you have to get exclusive telescope time to look at the spot with other methods to see if the signal is still there. Then, if it is, you need to have someone else at a different telescope take a look and confirm. Then you can announce ET.
[/quote]



ID: 830943
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 830968 - Posted: 16 Nov 2008, 0:08:47 UTC - in response to Message 830679.  

[quote]
With all due respect to DA, I believe that although they do use BOINC, project managers should have the right to assign credit as they see fit. Example -- MilkyWay is owned by RPI (Rensselaer Polytechnic Institute), not DA. Why should he have the right to control the credit they grant?
[/quote]

... and the part that really bothers me about this:

In theory, BOINC is a community -- I'm not talking about the SETI@Home cruncher community, or the users at any other project, but a community of projects that use BOINC.

... and while I know it has been popular in the past to blame the call for credit parity on the evil plans of Dr. Anderson to build a vast dominion to be ruled from his ivory tower at Berkeley, that doesn't make sense.

It does make sense for the BOINC community, and the BOINC users (us) to ask for credit parity, so that our contributions to various projects can be measured in some reasonable way.

We'll likely never have perfect parity, but that doesn't mean it isn't a worthy goal, or a good idea.
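To make "measured in some reasonable way" concrete, one simple yardstick is credit granted per CPU-hour on the same host, compared across projects. A hedged sketch; every number below is invented purely for illustration, not taken from any real project's stats:

```python
# Hypothetical per-project tallies for one host: (credit granted, CPU-hours spent).
# Real values would come from each project's stats exports.
tallies = {
    "ProjectA": (1200.0, 100.0),
    "ProjectB": (2300.0, 100.0),
}

# Credit earned per CPU-hour for each project.
rates = {name: credit / hours for name, (credit, hours) in tallies.items()}

# Parity would mean every ratio against a chosen baseline is ~1.0.
baseline = rates["ProjectA"]
disparity = {name: rate / baseline for name, rate in rates.items()}
print(disparity)  # here ProjectB pays ~1.92x ProjectA per CPU-hour
```

A cross-project stats site could publish exactly this kind of ratio, which is all "credit parity" really asks for.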

... and if credit is just fiction, then I don't blame users for getting angry.

Me, I just crunch.

-- Ned
ID: 830968
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 830972 - Posted: 16 Nov 2008, 0:23:49 UTC - in response to Message 830894.  

[quote]
You know, you may be right there -- though I wonder just how much work is being accomplished these days with the results that the SETI project has generated over the years -- I've heard tales which suggest the results are accumulating and not being processed. Heck, in the old pre-BOINC days of SETI Classic, as they were going into a months-long wind-down mode (I was there crunching), work that had already been sent out, returned *and validated* was being sent out again, as much to keep the population of SETI Classic folks happy while the most serious issues of BOINC were being dealt with.
[/quote]

If I remember correctly, SETI@Home Classic didn't have a lot of the accounting and tracking that we have with BOINC.

Among other things, I believe that the Classic screen saver had a voracious appetite, and failing to feed "classic" would lead to a denial-of-service attack from all of the classic participants.

The only reasonable solution was to feed them, even if what was sent was simply recycled.

It seems to follow directly from all of the classic screen-savers saying "feed me!" that BOINC handles running out of work, and actually does not allow work to be re-crunched (at least not easily).

We also need to remember that we're a litmus test -- we're here to filter out all of the uninteresting work units.

The NTPCKR is the next filter in line.

It seems fairly obvious that the NTPCKR isn't going to do much unless there is a whole stack of observations for each point in the sky -- so all of the history that some would call "warehoused" data is the background that will give the NTPCKR something worth processing when it starts.

When the scientists start looking through the output from the NTPCKR, then the actual science will start.

... or, they may find a need for another filter.

... or they may find a new way to crunch the signals we have (again, since that's what AP does).

Do I want all of this to happen more quickly? Sure, but I don't see things happening quickly at the current funding levels.

ID: 830972
BarryAZ

Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 831053 - Posted: 16 Nov 2008, 5:12:40 UTC - in response to Message 830972.  

No real argument here -- my point was rather to calm down the headhunters out there. The thing is, MilkyWay has some very real credit scheme issues -- which they have been made aware of; it simply may take some time and effort to deal with them. If you think SETI is resource-strapped -- projects like MilkyWay and others are WELL BELOW the resource poverty line.



[quote]
If I remember correctly, SETI@Home Classic didn't have a lot of the accounting and tracking that we have with BOINC.

Among other things, I believe that the Classic screen saver had a voracious appetite, and failing to feed "classic" would lead to a denial-of-service attack from all of the classic participants.

The only reasonable solution was to feed them, even if what was sent was simply recycled.

It seems to follow directly from all of the classic screen-savers saying "feed me!" that BOINC handles running out of work, and actually does not allow work to be re-crunched (at least not easily).

We also need to remember that we're a litmus test -- we're here to filter out all of the uninteresting work units.

The NTPCKR is the next filter in line.

It seems fairly obvious that the NTPCKR isn't going to do much unless there is a whole stack of observations for each point in the sky -- so all of the history that some would call "warehoused" data is the background that will give the NTPCKR something worth processing when it starts.

When the scientists start looking through the output from the NTPCKR, then the actual science will start.

... or, they may find a need for another filter.

... or they may find a new way to crunch the signals we have (again, since that's what AP does).

Do I want all of this to happen more quickly? Sure, but I don't see things happening quickly at the current funding levels.
[/quote]


ID: 831053
W-K 666
Volunteer tester

Joined: 18 May 99
Posts: 19091
Credit: 40,757,560
RAC: 67
United Kingdom
Message 831059 - Posted: 16 Nov 2008, 5:36:09 UTC
Last modified: 16 Nov 2008, 5:36:25 UTC

I wonder if it might be worthwhile to split the BOINC credit stats into two piles.

The first pile would be for established projects, and would be the only real pile from which host, user and team positions could be compared.

The second pile would be for test and start-up projects. This would allow application testing and credit-level tuning before a project moves into the first pile. And if the application is open source, it would give the volunteer programming wizards a chance to review and optimise the code before situations like these arise.
ID: 831059
BarryAZ

Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 831196 - Posted: 16 Nov 2008, 18:38:13 UTC - in response to Message 831059.  

Sort of an 'us and them' approach. Would optimized applications of any projects be similarly separated out?


[quote]
I wonder if it might be worthwhile to split the BOINC credit stats into two piles.

The first pile would be for established projects, and would be the only real pile from which host, user and team positions could be compared.

The second pile would be for test and start-up projects. This would allow application testing and credit-level tuning before a project moves into the first pile. And if the application is open source, it would give the volunteer programming wizards a chance to review and optimise the code before situations like these arise.
[/quote]


ID: 831196
W-K 666
Volunteer tester

Joined: 18 May 99
Posts: 19091
Credit: 40,757,560
RAC: 67
United Kingdom
Message 831350 - Posted: 17 Nov 2008, 1:10:59 UTC - in response to Message 831196.  

Do the few people who use optimised apps on established projects make very much difference to the overall stats?

Compared to the overall benefit that feeds back into the official app. SETI's default enhanced app is probably at least four times faster, and less buggy, thanks to the volunteer optimisers, and Einstein's apps have seen similar benefits.

It surprises me that more projects haven't made their apps open source, when you look at the increased crunching power gained.

For the users who do testing and help new projects, we learn to take the rough with the smooth: 40 hours of crunching for 130 credits on a developing app that, once it goes mainstream, does the same work on the same computer in under 3 hours for 50 credits -- and that was a 'good' app, it didn't crash on 50% of the systems.
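The arithmetic behind that example, spelled out as a quick sanity check (Python, using the numbers quoted above):

```python
# Numbers from the post above: a developing app vs. the eventual mainstream app.
dev_rate = 130 / 40    # 40 hr for 130 cr -> 3.25 credits per hour while testing
main_rate = 50 / 3     # under 3 hr for 50 cr -> ~16.7 credits per hour mainstream
ratio = main_rate / dev_rate
print(round(ratio, 1))  # early testers earned roughly 5x less credit per hour
```

In other words, testers accept a roughly fivefold credit penalty as the price of helping a project mature.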

[quote]
Sort of an 'us and them' approach. Would optimized applications of any projects be similarly separated out?

I wonder if it might be worthwhile to split the BOINC credit stats into two piles.

The first pile would be for established projects, and would be the only real pile from which host, user and team positions could be compared.

The second pile would be for test and start-up projects. This would allow application testing and credit-level tuning before a project moves into the first pile. And if the application is open source, it would give the volunteer programming wizards a chance to review and optimise the code before situations like these arise.
[/quote]


ID: 831350
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 831452 - Posted: 17 Nov 2008, 5:57:27 UTC - in response to Message 831350.  


[quote]
It surprises me that more projects haven't made their apps open source, when you look at the increased crunching power gained.
[/quote]

There are some projects that don't actually "own" their applications.

Maybe the original developer is gone, and no one knows how to build the app. Or they're using a wrapper, and all they have is a "binary" license to the underlying program -- and the "science application" is just a wrapper.

Others may consider the actual algorithms to be proprietary.

There can be legitimate reasons to not release source.
ID: 831452
W-K 666
Volunteer tester

Joined: 18 May 99
Posts: 19091
Credit: 40,757,560
RAC: 67
United Kingdom
Message 831473 - Posted: 17 Nov 2008, 8:01:02 UTC - in response to Message 831452.  


[quote]
It surprises me that more projects haven't made their apps open source, when you look at the increased crunching power gained.

There are some projects that don't actually "own" their applications.

Maybe the original developer is gone, and no one knows how to build the app. Or they're using a wrapper, and all they have is a "binary" license to the underlying program -- and the "science application" is just a wrapper.

Others may consider the actual algorithms to be proprietary.

There can be legitimate reasons to not release source.
[/quote]

I do realise that. CPDN comes to mind, in that the results of their applications have to comply with the standards of all the climate models that have been, and will be, run by many institutions for at least the last 20 years.
ID: 831473
BANZAI56
Volunteer tester

Joined: 17 May 00
Posts: 139
Credit: 47,299,948
RAC: 2
United States
Message 831780 - Posted: 18 Nov 2008, 9:54:03 UTC

LoL!

That's some of the same crowd that got DLB doing stupid things over something that could have been better handled by all involved.

What are those odds...



Ahh, might as well just burn the barn down to get rid of the mice again.
ID: 831780
Crunch3r
Volunteer tester

Joined: 15 Apr 99
Posts: 1546
Credit: 3,438,823
RAC: 0
Germany
Message 831821 - Posted: 18 Nov 2008, 14:55:08 UTC - in response to Message 831780.  

[quote]
LoL!

That's some of the same crowd that got DLB doing stupid things over something that could have been better handled by all involved.

What are those odds...

Ahh, might as well just burn the barn down to get rid of the mice again.
[/quote]


Yep, the same guys, trying to ruin the whole project for all participants because the admins over there are not acting the way they want them to...


Seems like a bunch of terrorists to me ...






Join BOINC United now!
ID: 831821
Dr. C.E.T.I.

Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 831829 - Posted: 18 Nov 2008, 15:21:25 UTC - in response to Message 831821.  

[quote]
LoL!

That's some of the same crowd that got DLB doing stupid things over something that could have been better handled by all involved.

What are those odds...

Ahh, might as well just burn the barn down to get rid of the mice again.

Yep, the same guys, trying to ruin the whole project for all participants because the admins over there are not acting the way they want them to...

Seems like a bunch of terrorists to me ...
[/quote]








BOINC Wiki . . .

Science Status Page . . .
ID: 831829
Aurora Borealis
Volunteer tester

Joined: 14 Jan 01
Posts: 3075
Credit: 5,631,463
RAC: 0
Canada
Message 831851 - Posted: 18 Nov 2008, 16:33:12 UTC

O.T.

Well, it looks like SZTAKI may at least have gotten part of their act together. The latest WU set appears to be running about 5 to 6 hours and to have proper completion-time estimates. They also increased deadlines to a month. Mind you, the BOINC starting estimates of 1300+ hours are still scaring people off. Anyone who completed the previous WUs now has their DCF so high it will likely keep them from returning until they understand the situation.

The other problem is they seem to have accidentally wiped out their result data files, so all the WUs that took 100+ hours to complete have disappeared with no credits issued.

It's a good thing I have a sense of humor about all this and know how to manually adjust my DCF.
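For context on why a runaway DCF matters: BOINC scales its raw runtime estimate for new work by the host's duration correction factor, so one pathological batch can inflate every subsequent estimate. A simplified sketch (illustrative only; the real client logic has more terms, and the workunit and host figures below are hypothetical):

```python
def estimated_runtime_hours(rsc_fpops_est, host_flops, dcf):
    # Simplified completion-time estimate: claimed work (FLOPs) divided by
    # the host's benchmarked speed (FLOP/s), scaled by the DCF, in hours.
    return rsc_fpops_est / host_flops * dcf / 3600.0

WU_FPOPS = 4e13     # hypothetical workunit size in floating-point operations
HOST_FLOPS = 4e9    # hypothetical benchmarked host speed

print(estimated_runtime_hours(WU_FPOPS, HOST_FLOPS, dcf=1.0))    # a few hours
print(estimated_runtime_hours(WU_FPOPS, HOST_FLOPS, dcf=470.0))  # over 1300 hours
```

With numbers like these, a DCF in the hundreds turns a few-hour task into the kind of 1300+ hour estimate mentioned above, which is why manually resetting the DCF helps.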

Boinc V7.2.42
Win7 i5 3.33G 4GB, GTX470
ID: 831851
Gecko
Volunteer tester

Joined: 17 Nov 99
Posts: 454
Credit: 6,946,910
RAC: 47
United States
Message 831863 - Posted: 18 Nov 2008, 16:56:59 UTC
Last modified: 18 Nov 2008, 17:12:56 UTC

Poor behavior by some members of any mass group is to be expected.
Projects have to manage themselves effectively to minimize the likelihood (and escalation) of it, and also to mitigate potential damage. There are also an unfortunate few who move among projects like a cancer, seeding discontent and chaos, ultimately doing more harm than what they "contribute".

It is clear that many "open source" projects don't really understand open source and therefore have NO plan or policy for how (or whether) it really fits their project.
On more than a few occasions it has been necessary for volunteer developers to educate project admins on the caveats, requirements, etc. In many cases this was done after the horse had left the barn and was running free in the wild, and the project admin was unprepared for (or underestimated) the potential consequences and political considerations. General ignorance, combined with the irrational and irresponsible behavior of some project admins, deserves at least equal billing for escalation at projects as the larger user base does.

Which brings me to my point: projects that don't understand open source shouldn't be open source until they have done their due diligence. Volunteer developers should not be held accountable for project admins' lack of understanding, as long as those developers adhere to all open source requirements and clearly defined project/scientific integrity standards.

There is no question that the ultimate responsibility lies with the project admin(s), since they determine whether the project is open or closed, the source used, and in most cases the actual source code.
ID: 831863
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 832474 - Posted: 20 Nov 2008, 17:28:45 UTC - in response to Message 830660.  

[quote]
This is truly something I thought I'd never see.
[/quote]
From that thread:
"AND the optimised Apps are "officially" used by the Project."
Does it mean that until now they refused to accept work done by an optimised app even if it passed validation?
ID: 832474


 
©2024 University of California

SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.