. . . the Politics of Rights regarding Participation

Message boards : SETI@home Science : . . . the Politics of Rights regarding Participation

Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 813800 - Posted: 1 Oct 2008, 22:55:18 UTC



. . . AMD Stream Computing Overview - pdf File


. . . from boinc_projects

-------- Original Message --------

Subject: Working with SETI@home development team to create an ATI GPU (graphics processor unit) accelerated version of your client (plus any other BOINC projects :-)

Date: Wed, 1 Oct 2008 16:48:34 -0400
From: Dodd, Andrew

...

ATI (now AMD)

is now in a great position to work with a number of distributed
computing projects that could benefit greatly from GPU acceleration.

We now have an SDK that will significantly help anyone trying to use the
GPU for stream calculation.

I've attached a link to the AMD Stream website (which includes a link to
the SDK), as well as the AMD Stream user guide, to give a quick overview of
things.

GPU Technology for Accelerated High Performance Computing

We'd love to work with SETI@home or any other project that uses BOINC
(if you could forward this to boinc_projects@ssl.berkeley.edu, it would
be greatly appreciated as well).

Thanks very much for your time,

Andrew



BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 819459 - Posted: 16 Oct 2008, 22:47:14 UTC



. . . date: Thu, Oct 16, 2008 at 8:37 AM


'The Quake-Catcher Network is a collaborative initiative
for developing the world's largest, low-cost
strong-motion seismic network by utilizing sensors
in and attached to internet-connected computers. The
picture above maps the major earthquakes recently
detected'

subject: [boinc_projects] CFP on Desktop Grids and Volunteer Computing Systems

CALL FOR PAPERS

Third Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009)
held in conjunction with the
IEEE International Parallel & Distributed Processing Symposium (IPDPS)
May 29, 2009
Submission deadline: November 14, 2008
Rome, Italy
web site: 3rd Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009)

See below for details

---------------------------------------------------------------------------------------------------------------------

Journal of Grid Computing
Special issue on desktop grids and volunteer computing
(Independent of PCGrid workshop)
Submission deadline: January 31, 2009
web site: Announcement - Call for papers: Journal of Grid Computing

######################################################################

Third Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009)
held in conjunction with the
IEEE International Parallel & Distributed Processing Symposium (IPDPS)
May 29, 2009
Submission deadline: November 14, 2008
Rome, Italy
web site: 3rd Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009)

Keynote speaker
Prof. Jon Weissman, University of Minnesota, USA

Desktop grids and volunteer computing systems (DGVCS's)
utilize the free resources available in Intranet or Internet
environments for supporting large-scale computation and
storage. For over a decade, DGVCS's have been one of the
largest and most powerful distributed computing systems in
the world, offering a high return on investment for
applications from a wide range of scientific domains
(including computational biology, climate prediction, and
high-energy physics). While DGVCS's sustain up to PetaFLOPS
of computing power from hundreds of thousands to millions of
resources, fully leveraging the platform's computational
power is still a major challenge because of the immense
scale, high volatility, and extreme heterogeneity of such
systems.

The purpose of the workshop is to provide a forum for
discussing recent advances and identifying open issues for
the development of scalable, fault-tolerant, and secure
DGVCS's. The workshop seeks to bring desktop grid
researchers together from theoretical, system, and
application areas to identify plausible approaches for
supporting applications with a range of complexity and
requirements on desktop environments. Last year's workshop
was a great success (see the past program here:
2nd Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2008)).

We invite submissions on DGVCS topics including the
following:

- cloud computing over unreliable enterprise or Internet resources
- DGVCS middleware and software infrastructure (including
management), with emphasis on virtual machines
- incorporation of DGVCS's with Grid infrastructures
- DGVCS programming environments and models
- modeling, simulation, and emulation of large-scale, volatile
environments
- resource management and scheduling
- resource measurement and characterization
- novel DGVCS applications
- data management (strategies, protocols, storage)
- security on DGVCS's (reputation systems, result verification)
- fault-tolerance on shared, volatile resources
- peer-to-peer (P2P) algorithms or systems applied to DGVCS's

With regard to the last topic, we strongly encourage authors
of P2P-related paper submissions to emphasize the
applicability to DGVCS's in order to be within the scope of
the workshop.

The workshop proceedings will be published through the IEEE
Computer Society Press as part of the IPDPS CD-ROM.

######################################################################
IMPORTANT DATES

Manuscript submission deadline: November 14, 2008
Acceptance Notification: January 16, 2009
Camera-ready paper deadline: February 15, 2009
Workshop: May 29, 2009

######################################################################
SUBMISSIONS

Manuscripts will be evaluated based on their originality,
technical strength, quality of presentation, and relevance
to the workshop scope. Only manuscripts that have neither
appeared nor been submitted previously for publication are
allowed.

Authors are invited to submit a manuscript of up to 8 pages
in IEEE format (10pt font, two-columns, single-spaced). The
procedure for electronic submissions will be posted at:

3rd Workshop on Desktop Grids and Volunteer Computing Systems (PCGrid 2009)

#####################################################################
ORGANIZATION

General Chairs

Franck Cappello, INRIA, France
Derrick Kondo, INRIA, France

Program Chair

Gilles Fedak, INRIA, France

Program Committee

David Anderson, University of California at Berkeley, USA
Artur Andrzejak, Zuse Institute of Berlin, Germany
Filipe Araujo, University of Coimbra, Portugal
Henri Bal, Vrije Universiteit, The Netherlands
Zoltan Balaton, SZTAKI, Hungary
Massimo Canonico, University of Piemonte Orientale, Italy
Henri Casanova, University of Hawaii at Manoa, USA
Abhishek Chandra, University of Minnesota, USA
Frederic Desprez, INRIA, France
Rudolf Eigenmann, Purdue University, USA
Renato Figueiredo, University of Florida, USA
Fabrice Huet, University of Nice Sophia Antipolis, France
Yang-Suk Kee, University of Southern California, USA
Tevfik Kosar, Louisiana State University, USA
Arnaud Legrand, CNRS, France
Virginia Lo, University of Oregon, USA
Grzegorz Malewicz, Google Inc., USA
Kevin Reed, World Community Grid, USA
Olivier Richard, ID-IMAG, France
Mitsuhisa Sato, University of Tsukuba, Japan
Luis M. Silva, University of Coimbra, Portugal
Jaspal Subhlok, University of Houston, USA
Alan Sussman, University of Maryland, USA
Michela Taufer, University of Delaware, USA
Douglas Thain, University of Notre Dame, USA
Bernard Traversat, SUN, USA
Carlos Varela, Rensselaer Polytechnic Institute, USA
Jon Weissman, University of Minnesota, USA



BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 819966 - Posted: 18 Oct 2008, 0:54:51 UTC
Last modified: 18 Oct 2008, 1:41:08 UTC


. . . "To: David Anderson", "From: John 37309":

subject: Re: [boinc_projects] Creating a unified marketing effort for all projects


This discussion is continuing:

Message boards : Promotion : Creating a unified marketing effort, for whatever final "brand" BOINC unifies under



Even though there has been a large increase in the number of new BOINC
projects, BOINC software and the projects themselves are actually declining
in numbers in proportion to general Internet growth. There is a thinning
effect happening. I really do hate to see this happen when, with a small
change in direction, BOINC projects could be growing at a considerably
faster rate.

I believe that some kind of collaborative group or organisation needs to be
set up where project administrators and scientists can share resources and
information and possibly even funding for promoting and marketing their
projects on a semi professional level.

Someone must start this initiative to market and sell all the projects as
one single combined entity. If it does not happen and things keep going the
way it is now, the decline will continue and BOINC projects will always be a
minority pastime that only hard-core computer geeks get involved in.

I believe that the only person who can start this new combined marketing and
promotion initiative would have to be someone from the BOINC project
management, or the principal investigator or project admin from one of the
top 5 or 6 projects. One person must take the initiative and lead the way.

If BOINC projects are not sold and marketed to the general public as one
combined unit, the current thinning will become much worse.

Marketing and selling the concept of BOINC science projects will work best
if carried out through one promotional website where ALL projects can
contribute to the effort. One single united effort to combine resources into
one website. The BOINC social network, the science social network, the BOINC
science education network, the BOINC science news network, all with relevant
information presented in a way that people enjoy reading and browsing. It
could all be one network! Not people working on their own to make
things fit into someone else's social network.

Unite, or BOINC projects will continue to be a minority thing!


BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 825016 - Posted: 30 Oct 2008, 21:40:09 UTC



. . . Subject: Re: BOINC_Projects LIST - from Eric J Korpela

[boinc_projects] 60 days of credit multiplier history....




A while back I said I would post how things were going with the new
credit multiplier code. I've attached a plot showing how our multipliers
have drifted over the last 61 days. Astropulse is the red line,
SETI@home is the green. Astropulse has pretty well stabilized at
1.05. SETI@home is more variable because there is a significant
dependence on the properties of the workunits, but it, too, is fairly
stable.

I don't have any data to show for prior to 61 days, because the earlier
code had a bug that caused the multiplier to overshoot its
target. Apart from the initial release there hasn't been much
attention paid to the day-to-day credit changes by SETI@home
participants.

The median of credited ops per CPU second is now pretty
close to the benchmark values
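Eric's description (a per-application multiplier that drifts until the median credited ops per CPU second matches the benchmark values) can be sketched as a simple feedback loop. This is a hypothetical illustration, not the actual SETI@home code; the function name, the gain, and the numbers are all assumptions:

```python
# Hypothetical sketch of a credit-multiplier feedback loop: nudge the
# multiplier a fraction of the way toward the value that would make the
# median raw claim match the benchmark-based credit.
def update_multiplier(multiplier, median_raw_ops, benchmark_ops, gain=0.1):
    """One adjustment step; gain < 1 avoids the overshoot bug described."""
    target = benchmark_ops / median_raw_ops
    return multiplier + gain * (target - multiplier)

m = 1.0
# If raw claims run about 5% under the benchmark value, the multiplier
# drifts toward ~1.05 and then holds, much as the Astropulse line did.
for _ in range(100):
    m = update_multiplier(m, median_raw_ops=100.0, benchmark_ops=105.0)
```

A small gain makes the multiplier approach its target gradually instead of overshooting, which is consistent with the stability Eric reports after the bug fix.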



So the mechanism appears to be stable and functioning if you'd like to use it.



On a totally unrelated note, the number of CPUs per active host on
SETI@home will hit 2 in the next few weeks.



The median has been at 2 since January.



[attached plot: credit_multiplier]




BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 830653 - Posted: 15 Nov 2008, 2:41:54 UTC



Berkeley NEWS: November 14, 2008



The CASPER/SETI@home/Astropulse teams wrote a joint paper about New SETI Sky Surveys for Radio Pulses.

Warning: highly technical (to be published in Acta Astronautica).



. . . see here: New SETI Sky Surveys for Radio Pulses ---> pdf File


BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 844221 - Posted: 23 Dec 2008, 15:38:26 UTC




BERKELEY NEWS: December 18, 2008





A version of SETI@home that runs on NVIDIA graphics boards using their CUDA computing engine has been released.

The CUDA version runs up to 10X faster than the CPU version.


NVIDIA has put out a press release about the SETI@home CUDA client and about GPU Grid.


See directions for getting started


. . . see the SETI@home CUDA FAQ


. . . there's also a Question & Answer Forum here: CUDA - Installing and running CUDA applications


BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 844645 - Posted: 24 Dec 2008, 17:58:02 UTC



. . . as an Addl. Note regarding the Previous Post by me here - this New one is from Paul D. Buck: Message boards : Number crunching : CUDA and Resource Share

well worth the read imho . . .

Thanks Paul

> Have a Wonderful Holiday and A Happy & Most Prosperous New Year as Well . . .


BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 846125 - Posted: 28 Dec 2008, 20:32:43 UTC



. . . from Paul D Buck

Date: Sat, 27 Dec 2008 23:50:51 -0800
From: "Paul D. Buck" <snip>
Subject: Re: [boinc_dev] New work fetch policy design

On Dec 27, 2008, at 11:25 AM, Nicolás Alvarez wrote:

>> David is looking into hacking something together for the 6.6 client
>> and
>> then fully fixing the issue in 6.8. We are looking for a January
>> release
>> for 6.6 if everything goes well.
>
> As long as that "hacking together" doesn't involve protocol changes
> that may
> cause problems in the future...

Getting back on topic... :)



I think we need to extend the "tagging" of work to allow multiple
classifications. At the moment, as I understand it, we tag CPU, GPU,
and Non-CPU as our primary classifications. The suggestion is to add
additional classes like Network and to allow for future extensions.

But we do have a confusion as to PROJECT classes and TASK classes.
SaH basically has two classes of work, CPU Intense and GPU Intense
tasks ... GPU Grid, conversely, has GPU Intense, and FreeHAL is Non-
CPU Intense (I can argue that Almere Grid and Almere Test Grid should
be sending their work as Non-CPU Intense, but that is a side issue).
Other Non-CPU Intense projects include DepSPDR and XtremLab and a
couple of others, I think. Almost all other active projects (unless I
have forgotten or do not know of one) are right now only sending out
CPU Intense tasks.

My first point is that the BOINC Manager, except in rare cases, will
only run one Non-CPU Intense task at the same time. It will run one
in parallel with a GPU Intense task as an exception. However, I think
that this should be relaxed to allow one per core, subject to
participant control.

So, with a hypothetical project mix (and assuming Almere set the task type
correctly), on my 8-core machine I should be able to run 8 CPU Intense
tasks and 8 Non-CPU Intense tasks. Assuming that I have only one GPU,
that would allow me to run one GPU Intense task and 7 tasks
distributed between Almere Grid and FreeHAL, assuming no other limits
being placed on these projects. With a project preference I could
set, for example, FreeHAL to only run one per computer to prevent
network swamping, which would mean that I could be running 8 CPU
tasks, 1 GPU task, 1 FreeHAL, and 6 Almere Grid tasks, a total of 16 ...

Were I to add GPUs, the number of other tasks would then drop
accordingly.
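Paul's task-mix arithmetic can be sketched as a small allocation routine. This is only an illustration of the scheme he describes, not BOINC Manager code; the function, the budget rule (each GPU task occupies one of the per-core non-CPU slots, matching his "drop accordingly" remark), and the per-project caps are assumptions fitted to his 8-core example:

```python
# Hypothetical sketch: one CPU Intense task per core, GPU Intense tasks
# bounded by GPU count, and the remaining per-core Non-CPU Intense slots
# handed out subject to per-project (participant-set) caps.
def task_mix(cores, gpus, non_cpu_caps):
    """non_cpu_caps maps project name -> per-computer cap (None = no cap)."""
    mix = {"cpu_intense": cores, "gpu_intense": gpus}
    budget = cores - gpus  # adding GPUs drops the other tasks accordingly
    for project, cap in non_cpu_caps.items():
        n = budget if cap is None else min(cap, budget)
        mix[project] = n
        budget -= n
    return mix

# 8 cores, 1 GPU, FreeHAL capped at one per computer to prevent swamping:
mix = task_mix(8, 1, {"FreeHAL": 1, "Almere Grid": None})
# -> 8 CPU Intense, 1 GPU Intense, 1 FreeHAL, 6 Almere Grid: 16 in total
```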

The second conceptual extension is that projects like FreeHAL and DepSPDR
should actually be tagged as both Network and Non-CPU Intense, with limiting
controls driven by both the participant-set network usage limits and
the per-computer core count, with the most restrictive limit
controlling. I could say that all cores could be used by
these two projects, but then set a bandwidth limit on usage, thus
limiting the actual practical number running (over time, several may
run at the same time until network limits have been reached, at which
time the tasks would hibernate until network usage could resume; one
would expect "learning" so that as time wends on the BOINC Manager
would run fewer tasks at the same time and spread out the network
usage to avoid bottlenecking).

Obviously, combining the Non-CPU Intense and CPU Intense tags would not be
allowed. How to prevent projects from assigning this is beyond my scope ... :)

With multi-tagging of tasks, we leave open the possibilities that a
task may, in fact, be both CPU Intense and GPU Intense. I do not know
how or why this would be done, but we should leave ourselves open to
the possibility...

My thinking here is still murky, and forgive the length of the posts,
but, it helps me think to write it down ...

But, with multi-tagging, assuming always that the projects do it
correctly, we *MAY* be able to salvage the tatters of the simple
Resource Share model. But I am still struggling ... here is what I
see as the problem: we have non-uniform resources which we want to
share. I may want to allow SaH to have a low share of my CPU, but a
higher share of my GPU.

Example, I have a 5 tier model for work allocation on my computers.
Projects of emphasis like a Project of the Month or a project I am
trying to drive my total CS number up to some goal are Tier 1 with
Resource Share of 100, Tier 2 have share of 50, Tier 3 - 25, tier 4 -
10, and tier 5 have 5 as their resource share.

TIer 1 has LHC and some emphasis projects (may be up to 4) and these
run on all possible computers.
Tier 2 has WCG, Malaria, CPDN, EaH, Spinhenge, Cosmology, and Milky Way
Tier 3 has Genetic LIfe, ABC, Rosetta etc ...
Tier 4 has SaH, Sztaki, etc ...
Tier 5 Has most of the Alpha State projects

The essence here is that I want to reward and participate most heavily
in those projects that are in production status and slightly less
emphasis on Beta status projects, and even less on Alpha status
projects. Production projects are in tiers 2 - 4, and the others in
tier 3 to 5; again the emphasis is to reward projects that are
"stable" and in a production status while still helping those emerging
projects.
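Since BOINC Resource Shares only matter relative to the sum of shares across attached projects, the tier numbers above translate into fractions of the machine only once the attached-project list is known. A minimal sketch, with an assumed, purely illustrative set of attached projects:

```python
# Hypothetical sketch: convert the five tier shares into per-project
# fractions of the resource. The attached-project/tier mapping below is
# invented for illustration only.
tier_share = {1: 100, 2: 50, 3: 25, 4: 10, 5: 5}
attached = {"LHC": 1, "WCG": 2, "CPDN": 2, "Rosetta": 3, "SaH": 4}

total = sum(tier_share[t] for t in attached.values())
fraction = {p: tier_share[t] / total for p, t in attached.items()}
# With these shares, LHC gets 100/235 of the machine and SaH only 10/235,
# matching the intent of rewarding production projects most heavily.
```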

But Almere Grid, FreeHAL, and GPU Grid do not map well into this kind of
structure, because they either have essentially zero impact on CPU use
(again, Almere Grid is improperly set) or they have no competitor for
the resource, as with GPU Grid (well, SaH, assuming I would want to move
SaH from Tier 4). I know that this is somewhat of an artificial moment,
but how do we really handle the competing tensions between the rule
of don't allow idle resources and my desire not to over-perform for
some projects?

Well, maybe someone else has some thoughts ...



see: dev_forum for access / replies . . .


BOINC Wiki . . .

Science Status Page . . .
Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 850795 - Posted: 8 Jan 2009, 11:06:26 UTC



. . . from Paul D. Buck [boinc_dev]

Date: Wed, 7 Jan 2009 22:41:27 -0800
From: "Paul D. Buck" <snip>
Subject: Re: [boinc_dev] Cross project login
To: Rom Walton



On Jan 7, 2009, at 4:16 PM, Rom Walton wrote:

> Application platforms are exported by the projects in the
> get_project_config.php RPC.
>
> I don't think the BOINC Manager wizards currently use that
> information, but the account managers could.
>
> As far as the BOINC complexity issue goes.
>
> I see two sides of this equation being argued by the same people at
> the same time. BOINC should have more bells and whistles that let
> you fine tune the client CPU scheduler (more complex), BOINC should
> be simple to use so that it just works (less complex, reduce the
> number of options, keep the CPUs and GPUs busy).




The reality is that the simpler something is to use, the more
additional engineering is required, and in many if not most cases the
more complex that something becomes ...

My long-time historical stance is that one of the original design
goals was to enlist the general population's idle computer time. Yet
the system as presented to the participant is extraordinarily complex,
and the amount of guidance information is of limited value, scattered,
and tends to be written by engineers and scientists for eggheads that
already know the answer. When I tried to write information in a way
that was accessible to the ordinary individual I was, well, eventually
marginalized until my effort became irrelevant.

We now have the happy situation where our information, such as is
available, is written for doctoral candidates, is more boring than
stereo instructions, is usually out of date, not well maintained, and
is scattered across half a dozen sites ...

Project specific information is usually even worse (CPDN is a
relatively rare exception in its attempt to explain what it is
doing) ...




> Although there is a third group that believes that something gets
> simpler to use with the more controls you add to it.
>
> Meanwhile the only real way to know what is really going on is to
> build some kind of telemetry system into BOINC that would show how
> often somebody tweaked project status, resource shares, network
> activity settings, CPU activity settings, and compare that to the
> desired preferences. By and large, the majority of people using BOINC
> are silent.
>
> Basically we are reliving the KDE vs. GNOME debate with neither side
> having concrete evidence about what is working and what isn't
> working relative to the silent majority.




And a large part of that silent majority leaves very early after
adopting BOINC and we also don't really know why that is either ...
you can tell that just by joining any project and running it for a
week ... you will have done more work than 50% of the participants (or
more) in almost every case ...

Someone famous once made a comment about hanging together or
separately ... well, we are hanging separately ...

The problem with asking questions or for input is that someone may
take you up on your challenge and provide answers or input ... and you
have to be ready for input that may not be what you expect or want. I
have two current cases here, Dr. Anderson's call for comments on his
draft and Willy's on his AM ... to this point, the answer is a very
loud and deafening lack of interest in the response ... why, we don't
know, both are silent. Is it because it is bad input? I don't know,
both are silent ...

But what is not silent is the apparent contempt for people whose only
desire is to improve the system...

And, I think, this is one of the reasons that many are silent ... some
send me private e-mail telling me to keep trying ... they agree with
me ... but I know why they are silent ... look at how much of what I
have to say is treated ... and no, I am not talking about the dissent
and discussions ... I am talking about the verbal abuses and the worse
silences ...

And a long time ago I suggested a pop-up asking why you were detaching
from a project so that we could start to collect data ... ignored ...

I suggested a link to documentation ... well, years later we got
one ... to documentation that can fit on one page to describe all of
BOINC Manager ... pathetic ... compare and contrast the pages I wrote
about each and every tab in the UBW and the trivia that explains
nothing in the "official" documentation ... mine was deemed bad
because I did not write it like an encyclopedia entry ... and shudder,
shudder, horror of horrors I mentioned projects ... which is the real
point of BOINC ...

Actually, I don't think many of the people are all that silent ... but
you have to be prepared to listen ... and then to act ...

But those that are in positions to act, such as even yourself ... are
not prepared to listen ... and if you do ... you don't hear ...

One of my favorite sayings sums it up nicely:

I know you believe you understand what you think I said. But I am not
sure you realise that what you heard is not what I meant.


People are speaking, but you have to be open to what they mean when
they speak and not what you want them to be saying ...

When Dr. Anderson made the call for input ... he got it ... his
response, silence and by the evidence, is ignoring all of the
input ... maybe I am wrong, but history says I am not ... Willy asked
a question ... and to this point is silent? Is he still on vacation?
Not reading his e-mail? Dead? I don't know ... but the silence is
deafening ... Matthew responded and the answer is don't hold your
breath ... *MAYBE* later this year ... then again ... maybe not ...

And that is the answer ... why speak up ... why waste your time ... it
matters not ... it will make no difference ... all that is
accomplished by speaking out is that I have wasted my time trying ...
it cost me nothing to stay silent ... perhaps the silent ones are the
wisest of us all ...




BOINC Wiki . . .

Science Status Page . . .
RandyC
Joined: 20 Oct 99
Posts: 714
Credit: 1,704,345
RAC: 0
United States
Message 850882 - Posted: 8 Jan 2009, 17:12:48 UTC - in response to Message 850795.  



. . . from Paul D. Buck [boinc_dev]

Date: Wed, 7 Jan 2009 22:41:27 -0800
From: "Paul D. Buck" <snip>
Subject: Re: [boinc_dev] Cross project login
To: Rom Walton

<snip>

Someone famous once made a comment about hanging together or
separately ... well, we are hanging separately ...

The problem with asking questions or for input is that someone may
take you up on your challenge and provide answers or input ... and you
have to be ready for input that may not be what you expect or want. I
have two current cases here, Dr. Anderson's call for comments on his
draft and Willy's on his AM ... to this point, the answer is a very
loud and deafening lack of interest in the response ... why, we don't
know, both are silent. Is it because it is bad input? I don't know,
both are silent ...

But what is not silent is the apparent contempt for people whose only
desire is to improve the system...

<snip>


Perhaps it is about time for some serious soul-searching among the development community. When leadership (i.e. Dr. A. et al) is no longer responsive to the needs/desires of the community, it is no longer leadership, but an open invitation for revolution. I don't know the feasibility of the following, but:

Maybe it's time to port the whole shebang over to someplace like Sourceforge.net and enable those who truly want a workable product to make the updates they feel are needed.

I do understand that it's not possible to add every bell and whistle someone calls for into BOINC, but it seems that a lot of nasty bugs are being allowed to fester in favor of going off on some tangent.

One major undesirable result of the above occurring might be that various people currently employed working at BOINC could lose their income...not a good thing at all. Another point to consider would be that such an undertaking would be strictly by volunteers and thus no guarantee of long term support.

If the BOINC project as it now stands is unresponsive to the BOINC community as a whole, they need to reexamine their priorities lest the community decides to move forward on its own without them.
Paul D. Buck
Volunteer tester
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 850888 - Posted: 8 Jan 2009, 17:37:31 UTC - in response to Message 850882.  
Last modified: 8 Jan 2009, 17:41:14 UTC

Perhaps it is about time for some serious soul-searching among the development community. When leadership (i.e. Dr. A. et al) is no longer responsive to the needs/desires of the community, it is no longer leadership, but an open invitation for revolution. I don't know the feasibility of the following, but:

Maybe it's time to port the whole shebang over to someplace like Sourceforge.net and enable those who truly want a workable product to make the updates they feel are needed.

I do understand that it's not possible to add every bell and whistle someone calls for into BOINC, but it seems that a lot of nasty bugs are being allowed to fester in favor of going off on some tangent.

One major undesirable result of the above occurring might be that various people currently employed working at BOINC could lose their income...not a good thing at all. Another point to consider would be that such an undertaking would be strictly by volunteers and thus no guarantee of long term support.

If the BOINC project as it now stands is unresponsive to the BOINC community as a whole, they need to reexamine their priorities lest the community decides to move forward on its own without them.



There is a sub-project out there ... sadly, I lost it almost as soon as I saw it ... but they did take a fork of the source code and are, or were, developing on it ... I do not know how active they are or what they are doing ...

We have no guarantee of support now ...

I have been feeling for some time now that one of the best things that could happen to BOINC is that it could lose all funding from external sources. Then, and likely only then, will the projects see that the only choice that they have is to come out of their ivory towers and actually do some things to save their projects ...

I doubt it though... for a group of supposedly smart people they seem remarkably dense ...

{edit} Here it is, Synecdoche for those with ability and health ... please visit and support ...{/edit}
Paul D. Buck
Volunteer tester
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 853544 - Posted: 15 Jan 2009, 0:24:29 UTC

I was just mousing about and stumbled across this very nice summarization of the "Credit problem" by Dr. Eric Korpela.

An interesting read ...

And yet, when I was done I could not help but think that he missed a couple points.

1. Deflationary schemes - he and the developers seem heck-bent on developing deflationary schemes whenever they look at the credit issue. The original design was to tie to an "ideal machine" and calculate work as compared to that machine. As we know, the mechanism to do that is fatally flawed. We also know that, as he rightly pointed out, improvements in efficiency currently lead (wrongly) to less payment. Yet his proposals all lead to the same type of deflationary system.

2. All developer-led proposals inevitably start from the proposition that some are being paid too much, and therefore credit adjustments must always be downwards to some "ideal". Why aren't there any proposals to find the top and adjust *UP*?

3. There are more issues with credit across projects, including what to do about tasks that fail. In the early days the idea was that there would be no payment regardless of cause. Well, CPDN ignored that because they recognized that their models have a high failure rate and that many models would crash through no fault of the participant. For most other projects, the penalty of an occasional dead task was just a cost of doing business. Yet more and more projects are running longer and longer tasks, and the stability of the models is not that hot.

4. A comment by someone else said that cross-project parity was impossible. Well, at the moment it is difficult to believe that it is possible. But if you look at the history, there has been virtually no effort to achieve parity, other than by fiat and edict. And I will submit that the cost to the projects has been high, in that this is still one of the most contentious issues because of its importance to so many people. Eric rightly noted that the economy is that we do tasks and get paid credits. I don't select projects based solely on the payment rate; some do, but I think most don't ... Ramsey@Home is planning to go credit-free, and hopefully he will get enough pure science types to contribute that he can get the work done. But I won't be doing any, because, well, I won't get paid at all ...

The idea of using a standardized machine to measure production is a good one. The only problem with Eric's proposal is that, as time went on, he wanted the definition of what constitutes a standard machine to change. The point of the Cobblestone was that it would be fixed. We pick a machine today that we call "average", pick a target earning (and to avoid complaints I would suggest we adjust up), and run the projects on it to establish the adjustments. Make those adjustments ... then we either keep the machine working in perpetuity, calculating what the credit issue rate should be ... or ... we just calculate rates by extension ...
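For reference, the standard-machine idea above is exactly what the Cobblestone definition encodes. A minimal sketch of the classic benchmark-based claim, assuming only the published definition (one Cobblestone = 1/200 day of CPU time on a reference machine benchmarking 1,000 MFLOPS on Whetstone); the function name and example numbers are mine, not BOINC's code:

```python
# Sketch of the classic benchmark-based BOINC credit claim.
# A Cobblestone is defined as 1/200 day of CPU time on a reference
# machine that benchmarks 1,000 MFLOPS (1 GFLOPS) on Whetstone.

SECONDS_PER_DAY = 86_400
REFERENCE_GFLOPS = 1.0          # the "standard machine"
COBBLESTONES_PER_REF_DAY = 200  # by definition

def claimed_credit(whetstone_gflops: float, cpu_seconds: float) -> float:
    """Credit claimed from benchmark * run time, per the classic scheme."""
    cpu_days = cpu_seconds / SECONDS_PER_DAY
    return COBBLESTONES_PER_REF_DAY * (whetstone_gflops / REFERENCE_GFLOPS) * cpu_days

# A host benchmarking 2 GFLOPS that runs a task for 12 hours:
print(claimed_credit(2.0, 43_200))  # 200 * 2 * 0.5 = 200.0
```

Note that nothing in this formula counts the FLOPs the application actually performed, which is the root of the deflation argument in this thread: optimize the application and the run time (hence the claim) drops, even though the science output is unchanged.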

What the heck, it has been a year since he wrote that ... and we have had no progress ... time to start the conversation over again ... this is still probably the most talked about issue on the minds of participants ...
ID: 853544
Profile Dr. C.E.T.I.
Joined: 29 Feb 00
Posts: 16019
Credit: 794,685
RAC: 0
United States
Message 866399 - Posted: 17 Feb 2009, 6:24:49 UTC


from: Paul D. Buck
to: Eric J Korpela

cc: Boinc Projects

date: Tue, Feb 17, 2009 at 12:30 AM
subject: Re: [boinc_projects] credit for GPU applications



On Feb 16, 2009, at 3:30 PM, Eric J Korpela wrote:

> On Mon, Feb 16, 2009 at 9:56 AM, Paul D. Buck
> wrote:
>> The only thing that *I* think has gotten out of hand is the loony
>> notion that as computers get faster they should be paid less on a per
>> second basis than what slower computers in the past were paid.
>
> I don't think that anyone is pushing that notion. The slowest machine
> I have running SETI@home earns credit at about the same rate it did
> several years ago.


If you have been lowering credit as you claim, that does not square with the assertion that a slow machine earns at the same rate it did years ago while the credit claimed per unit of work has been reduced. Now, I know that some of the applications have been revised, and I forget exactly when the last doubling of effort was made (where run times doubled because the sensitivity was increased), but I would suggest the two occurred at about the same time. So the applications became more sensitive, increasing the amount of work done (more FLOPS), while the time to perform them decreased due to improved compilers, vectorization, etc. Meaning, to my mind: if you do more FLOPS, your old machine should be earning more, not less ...


> The problem is not that you are being underpaid now, it's that you
> were overpaid in the past. BOINC credit is supposed to be based on
> FLOPs, but it's not. For SETI@home, 5 years ago people were getting
> credited with 18 FLOPs for every FLOP they provided. Now they are
> only getting credited with 2.4 FLOPs for every FLOP they provide
> (assuming add=multiply=divide=sin()). On some other projects people
> are probably still getting credited with 20 FLOPs for every one they
> provide. Which project is granting the right amount of credit?


I don't know of any project that is awarding the "right" amount. But here you validate my thesis: the developers created a system that, as you claim, incorrectly paid us in the past, so the "ideal" way to "fix" things is to decrease the award. Going against human nature, you are. Which is why you get the complaints. If we were "overpaid" in the past, well, live with it ... how would you like it if UCB came in, pointed out that you had been overpaid, and said you needed a cut in pay? From what you just wrote, I bet you would be ecstatic!
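Eric's "18 credited FLOPs per FLOP provided" versus "2.4" figures can be made concrete from the Cobblestone definition alone. A back-of-envelope sketch; the task sizes below are hypothetical numbers chosen to reproduce his ratios, not actual SETI@home workloads:

```python
# From the Cobblestone definition (1/200 day on a 1 GFLOPS reference
# machine), one credit corresponds to a fixed number of FLOPs. Dividing
# the FLOPs implied by the granted credit by the FLOPs a task actually
# performed gives the "credited FLOPs per FLOP" inflation ratio.

FLOP_PER_COBBLESTONE = (86_400 / 200) * 1e9   # 4.32e11 FLOP per credit

def inflation_ratio(credit_granted: float, actual_flops: float) -> float:
    """Credited FLOPs granted per FLOP actually performed."""
    return credit_granted * FLOP_PER_COBBLESTONE / actual_flops

# Hypothetical: a task granted 100 credits while performing 2.4e12 FLOPs
# is credited at 18 FLOPs per FLOP; the same grant against 1.8e13 FLOPs
# of counted work comes to 2.4.
print(round(inflation_ratio(100, 2.4e12), 1))   # 18.0
print(round(inflation_ratio(100, 1.8e13), 1))   # 2.4
```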

Since it was years ago I cannot recall all the details, but perhaps one of the other lurkers can refresh us ... my memory says that back in BOINC Beta we actually did count the FLOPS of several tasks and found that we had reasonably accurate numbers. IF that is true and you have cut the awards since then, well, you have, as you stated in your article, deflated the awards. In other words, you are now paying less for the same work than in the past.

Which is even more puzzling to me in that here you indicate
differently...


> And yes, unless projects actually come up with a conversion from
> credit to FLOPs, stats sites should drop the bogus FLOP/s claims for
> the projects and for BOINC as a whole. The problem is that I only
> know of one project that has actually calculated the number of
> floating point operations that it does. Most of the rest just assume
> that it's equal to the bench mark values times the run time. Since
> benchmark values are the only measure we have, it's the only available
> measure for cross project comparison.


Actually it isn't ... we have had several, but, as always, participant
suggestions are not really welcome in this world, so these other
measurements are not considered. There is a cross-project CS per
second table for one. I cannot vouch for its accuracy, but, assuming
it is accurate it provides a clear demonstration of the problem and
differences between the projects.

The second unit, which I suggested back in beta, is the same one you have touted: ye olde SaH task. Using the actual work of the projects to measure capability has been suggested several times over the history of BOINC. And, as always, those of us who suggested it as a meter-stick were patted on the head and told to go away ... If you go look in the UBW I suggested a rather involved process that would have performed that function. Perhaps the most useful part of that proposal is that I also summarized the efforts and tests made since before BOINC Beta ended ... and, as I noted, all ignored ... and that is the saddest thing of all ... this could have been fixed before it BECAME a problem.

Because, sir, you all forget one of the prime lessons out of SETI@Home
Classic. And that lesson is that the participants want a fair and
equitable measurement of their effort. Like it or not, all the
disclaimers to the contrary ... if you did away with the credit
system ... you would do away with most of the participation. I at
least am honest about it ... though I do not live or die by the CS, if
I cannot measure my contributions, why bother?


>> Just as silly is this notion that MW overpays and there should be
>> pressure to lower the awards. Where is the pressure to force
>> projects
>> that under-award to increase their payment?
>
> Name a project that under awards. If they exist, why not ask them to
> raise their awards? There's no reason they shouldn't. If they used
> the credit multiplier method, it would happen automatically.
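The "credit multiplier method" mentioned in the quote could work roughly as follows. This is a sketch of the idea as I read it, not BOINC's actual implementation; all names and numbers are illustrative:

```python
# Sketch of a per-project credit multiplier: scale each project's
# granted credit by the ratio of a cross-project target rate to the
# rate the project is observed to grant, so projects that over- or
# under-pay converge toward the target automatically.

def credit_multiplier(target_credit_per_ghour: float,
                      observed_credit_per_ghour: float) -> float:
    """Scale factor applied to a project's granted credit."""
    return target_credit_per_ghour / observed_credit_per_ghour

def adjusted_grant(raw_credit: float, multiplier: float) -> float:
    return raw_credit * multiplier

# A project observed to grant 50 credits per GFLOPS-hour against a
# cross-project target of 25 would halve its grants:
m = credit_multiplier(25.0, 50.0)
print(adjusted_grant(100.0, m))  # 50.0
```

Under such a scheme, an under-paying project's multiplier would exceed 1 and its grants would rise, which is exactly the symmetric upward adjustment argued for in this thread.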


Name me a project that REALLY and TRULY listens to its participants.
I will tell you truly that I cannot think of one that really listens
to the participants. Who is clamoring for MW to reduce its awards?
It isn't the participant community. If Travis was listening to the
participant community he would not be mucking about with the awards.
We liked them where they were thank you very much.

And I can tell you from my very own personal history that even for participants who have the energy, talent, ability, time, and will to do many things, their efforts are usually only grudgingly accepted. More commonly they are not even acknowledged. This point is somewhat tangential to the issue at hand, but it is illustrative of the "normal" process. The participants ask for x and they get 6 ... we ask for fair and understandable credit awards and we get deflation and excuses ...

I ask again, in what universe does it improve motivation to cut pay
rather than raise it?


> Regarding Milkyway, there is little doubt that they grant more credit
> per CPU second than other projects and I would be astonished if they
> actually did more work per cobblestone granted than other projects.
> Does that mean they are granting too much credit, are other projects
> granting too little, or are credits meaningless? Should I feel
> justified in coming in tomorrow and bumping the credits SETI@home
> grants by a factor of 7.4, since that's about the factor by which
> SETI@home has been optimized since it switched to BOINC?


Why not? Back in the day when I was documenting and dealing with
other participants on a daily basis, one of the most common questions
was why when I claim x do I get awarded y? And none liked being told
that being awarded 20 CS when the claim was 30 was sensible, or fair.
Think about it.

What is most astonishing to me is that, given the Cobblestone is a mythical measurement, long since removed from the original standard CS machine, you folks are so stingy with it. Does it really feel like it is coming out of your pay?




BOINC Wiki . . .

Science Status Page . . .
ID: 866399
Profile Dr. C.E.T.I.
Message 879686 - Posted: 27 Mar 2009, 5:31:00 UTC




~ Jodrell Bank in Cheshire is home to the Lovell Telescope ~


. . . Radio astronomy gets grant boost - Jodrell Bank Centre for Astrophysics




Scientists from the University of Manchester are to benefit from a ten million euro grant designed to support radio astronomy across Europe.

The university's Jodrell Bank Centre for Astrophysics co-ordinates RadioNet - a network of the major radio astronomy observatories across Europe.

The money will support research into multi-pixel radio cameras and analysing signals received by radio telescopes.


The project will also organise workshops and schools for students.


"Over the past five years, RadioNet has transformed radio astronomy in Europe," said Professor Phil Diamond, director of Jodrell Bank Centre for Astrophysics.


"It is now natural for radio astronomers to think in terms of European collaboration as the way to proceed."


RadioNet funding will also support operations of the e-Merlin telescope array, via Trans-National Access, enabling others across Europe to make best use of this major new facility.

e-Merlin is the upgrade to a network of seven radio astronomy stations - from Jodrell and its 76m Lovell Telescope in the North West, to Lords Bridge, just outside Cambridge in East Anglia.


By linking the stations together using optic-fibre cables, e-Merlin can mimic a single super-sensitive radio telescope spanning 217 km.


It has been described as the radio astronomy equivalent of the Hubble Space Telescope - a 'radio camera'.

RadioNet involves 26 partners from 13 different countries.




. . . bravissimo


BOINC Wiki . . .

Science Status Page . . .
ID: 879686
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.