New Credit Adjustment?

1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 790116 - Posted: 30 Jul 2008, 23:37:09 UTC - in response to Message 790086.  


The issue I see is that what has become known as the 'credit multiplier' is really an 'equivalent FLOP' multiplier. In other words, it is used to calibrate the app's instrumented count of floating point operations to the reference Whetstone FLOP.


Funny, Ned will agree with you, but when I bring up the idea that other projects would really need to do flop counting too for it to be equivalent and ask if we're going to go back to benchmarks, I get static...

If I'm reading the table at BOINCstats correctly:

If you take SZTAKI and multiply their credit by about 1.6, they'll grant cobblestones at about the 100-cobblestone standard.

If you take Riesel and multiply by something like 0.33, they'll grant credit at about the 100-cobblestone standard.

.... regardless of how they calculate credit.
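
To spell out the arithmetic (a sketch only; the 0.63 and 3.0 rates are just back-figured from the multipliers above, not measured values), the correction factor is simply the reciprocal of a project's granting rate relative to the 100-cobblestone standard:

```python
# Rough sketch only, with rates back-figured from the examples above.
# If a project grants credit at some multiple of the 100-cobblestone
# standard, the factor needed to bring it into line is the reciprocal
# of that relative rate, regardless of how it calculates credit internally.

relative_rate = {          # granted credit relative to the standard (illustrative)
    "SZTAKI": 0.63,        # low payer
    "RieselSieve": 3.0,    # high payer
}

for project, rate in relative_rate.items():
    correction = 1.0 / rate
    print(f"{project}: multiply granted credit by {correction:.2f}")
# SZTAKI: multiply granted credit by 1.59
# RieselSieve: multiply granted credit by 0.33
```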

So again, my question to you, Brian: the published standard for BOINC credit pays for work done. Do you think the published standard is wrong, and why?

Please respond in a language which is not fictional or extinct.
ID: 790116 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 790118 - Posted: 30 Jul 2008, 23:38:50 UTC - in response to Message 790110.  

Q. Will the credit I get from my current machine go down as the middle of the road machine gets faster?
A. If you use a stock app, 6 months from now the machine you are using now will still get about the same number of credits per second as it does now.

OK, I understand that.

Q. If you use an optimized app, 6 months from now will the machine you are using now still get about the same number of credits per second that it does now?

If the stock application gets faster compared to the optimized application, your credit would go down (as happened when 5.x was released).

If the optimized application gets faster compared to the standard application, your credit would go up.
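
To put rough numbers on it (this is only a sketch, not the actual SETI@home credit code; the constant assumes the "100 credits per day at a sustained 1 GFLOPS" reading of the standard, and the operation count is made up): with FLOP counting, the claim depends on the operations counted rather than elapsed time, so credit per second scales with how fast the app gets through a workunit.

```python
# Sketch only -- not the actual SETI@home credit code. With FLOP counting,
# the claimed credit for a workunit depends on the operations counted, not
# on elapsed time, so credit *per second* scales with app speed.

CREDIT_PER_FLOP = 100.0 / (1e9 * 86400)   # assumes "100 credits per day at a
                                          # sustained 1 GFLOPS"; the exact value
                                          # doesn't matter for the ratio

def credit_per_second(counted_flops, runtime_seconds):
    claimed = counted_flops * CREDIT_PER_FLOP   # same claim however long it took
    return claimed / runtime_seconds

flops = 3.0e13                          # made-up operation count for one workunit
stock = credit_per_second(flops, 10000.0)
optimized = credit_per_second(flops, 5000.0)    # app twice as fast
print(optimized / stock)                # 2.0: credit/sec ratio is just the speedup
```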
ID: 790118 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Send message
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 790209 - Posted: 31 Jul 2008, 3:11:38 UTC - in response to Message 790068.  

maybe someone should post a list of which projects give high credit compared to seti.
...

The cross-project comparison from http://boinc.netsoft-online.com/e107_plugins/boinc/get_cpcs.php fulfills that role fairly well. Boincstats also has displays of the same data. The main difficulty is having the huge table crossing both ways, so here's an extract simply showing the relative credit rates at projects with at least (1000) hosts in common with S@H:

0.627 (1247) proteins@home
0.690 (1629) LHC@home
0.713 (2484) Leiden Classical
0.720 (15345) World Community Grid
0.732 (2742) ABC@home
0.733 (4612) climateprediction.net
0.741 (7656) Spinhenge@home
0.756 (15944) Rosetta@home
0.758 (3016) uFluids
0.788 (4022) SETI@home Beta
0.803 (5541) malariacontrol.net
0.804 (1125) Superlink@Technion
0.900 (1716) POEM@HOME
0.902 (3559) TANPAKU
0.998 (3599) PrimeGrid
1.054 (2476) SHA-1 Collision Search Graz
1.134 (3536) SIMAP
1.231 (1056) Intelligence Realm
1.266 (39198) Einstein@Home
1.426 (6702) QMC@HOME
1.663 (3608) MilkyWay@home
1.917 (1724) RieselSieve
2.023 (3884) Cosmology@Home

SETI@home of course gives a 1.000 comparison to itself, and the (332369) hosts in common is a clear indication that only active hosts are being considered.
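
For anyone curious how such an extract might be produced, here's a sketch. I don't know the exact method the netsoft-online page uses; the assumption below is that each figure is a median, over hosts active on both projects, of a host's credit per CPU second at the other project divided by its rate at S@H, and that projects sharing fewer than 1000 hosts are dropped:

```python
# Sketch only -- an assumption about how the cross-project comparison could be
# computed, not the actual netsoft-online code. For each project, compare each
# shared host's credit per CPU second on the two projects, and keep the project
# only if enough hosts are in common.

from statistics import median

def relative_rate(common_hosts, min_hosts=1000):
    """common_hosts: list of (cps_other, cps_seti) pairs, one per shared host."""
    if len(common_hosts) < min_hosts:
        return None                       # too few hosts in common to list
    ratios = [other / seti for other, seti in common_hosts if seti > 0]
    return median(ratios)                 # median resists hosts reporting junk
```

On that reading, a project whose shared hosts earn about 1.266 times as much per CPU second as at S@H would show up the way Einstein@Home does above.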
                                                              Joe

ID: 790209 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19078
Credit: 40,757,560
RAC: 67
United Kingdom
Message 790220 - Posted: 31 Jul 2008, 3:44:37 UTC - in response to Message 790209.  

maybe someone should post a list of which projects give high credit compared to seti.
...

The cross-project comparison from http://boinc.netsoft-online.com/e107_plugins/boinc/get_cpcs.php fulfills that role fairly well. Boincstats also has displays of the same data. The main difficulty is having the huge table crossing both ways, so here's an extract simply showing the relative credit rates at projects with at least (1000) hosts in common with S@H:

0.627 (1247) proteins@home
0.690 (1629) LHC@home
0.713 (2484) Leiden Classical
0.720 (15345) World Community Grid
0.732 (2742) ABC@home
0.733 (4612) climateprediction.net
0.741 (7656) Spinhenge@home
0.756 (15944) Rosetta@home
0.758 (3016) uFluids
0.788 (4022) SETI@home Beta
0.803 (5541) malariacontrol.net
0.804 (1125) Superlink@Technion
0.900 (1716) POEM@HOME
0.902 (3559) TANPAKU
0.998 (3599) PrimeGrid
1.054 (2476) SHA-1 Collision Search Graz
1.134 (3536) SIMAP
1.231 (1056) Intelligence Realm
1.266 (39198) Einstein@Home
1.426 (6702) QMC@HOME
1.663 (3608) MilkyWay@home
1.917 (1724) RieselSieve
2.023 (3884) Cosmology@Home

SETI@home of course gives a 1.000 comparison to itself, and the (332369) hosts in common is a clear indication that only active hosts are being considered.
                                                              Joe


My first thought looking at those figures is: why is SetiBeta at 0.788?
Using optimised apps there is discouraged.
Most of the multibeam work for the last few months has been VHAR, which grants higher than average.
Are Astropulse claims very low? Until very recently AP was using the BM * time method, and there are still unvalidated tasks in the system that used this method.

Or is the optimised app making the difference on Seti? It shouldn't, because there are actually very few of us using the optimised apps.
ID: 790220 · Report as offensive
Alinator
Volunteer tester

Send message
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 790228 - Posted: 31 Jul 2008, 4:00:12 UTC - in response to Message 790220.  
Last modified: 31 Jul 2008, 4:07:21 UTC


My first thought looking at those figures is: why is SetiBeta at 0.788?
Using optimised apps there is discouraged.
Most of the multibeam work for the last few months has been VHAR, which grants higher than average.
Are Astropulse claims very low? Until very recently AP was using the BM * time method, and there are still unvalidated tasks in the system that used this method.

Or is the optimised app making the difference on Seti? It shouldn't, because there are actually very few of us using the optimised apps.


Well, the first thing that comes to my mind would be that a large percentage of the folks who run Beta run optimized here on main. Remember, running optimized apps on Beta is generally frowned upon, since it defeats the purpose of testing and evaluating the stock app being worked on.

Therefore, since to be included in the CPP table the host has to run both projects, that might account for some of the difference.

<edit> Your comment on the effect of AP is interesting though. I haven't crunched on Beta for a while, so I can't say anything about what the rates for AP would be compared to MB for my hosts. I guess we'll see if that affects the CPP ratio once AP becomes more widespread here in the workstream.

Alinator
ID: 790228 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Send message
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 790230 - Posted: 31 Jul 2008, 4:02:15 UTC - in response to Message 790220.  

...
Or is the optimised app making the difference on Seti? It shouldn't, because there are actually very few of us using the optimised apps.

My guess is that a very large proportion of those doing Beta do run optimized here, so that comparison based on hosts in common is distinctly biased. The "active host" count for Beta almost exactly matches the number in common with SETI@home main. Bad sampling, in statistical terms.
                                                                 Joe
ID: 790230 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19078
Credit: 40,757,560
RAC: 67
United Kingdom
Message 790232 - Posted: 31 Jul 2008, 4:10:04 UTC - in response to Message 790230.  

...
Or is the optimised app making the difference on Seti? It shouldn't, because there are actually very few of us using the optimised apps.

My guess is that a very large proportion of those doing Beta do run optimized here, so that comparison based on hosts in common is distinctly biased. The "active host" count for Beta almost exactly matches the number in common with SETI@home main. Bad sampling, in statistical terms.
                                                                 Joe

Hadn't thought of that, but it is logical.
The deduction from that is: if everyone used the optimised apps here, Seti's throughput and credits would rise by a factor of 1.000/0.788 = 1.269.
ID: 790232 · Report as offensive
Eric Korpela Project Donor
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar

Send message
Joined: 3 Apr 99
Posts: 1382
Credit: 54,506,847
RAC: 60
United States
Message 790245 - Posted: 31 Jul 2008, 4:44:18 UTC - in response to Message 789294.  


It is really twofold. One is the introduction of another app into Seti, that being Astropulse; IMO it should be another project, as in "astropulse@home", since after all that is one of the advantages of the BOINC concept. I see its introduction into seti@home as an attempt to guarantee its success by forcing it down the throats of a captive audience at the expense of another balancing act.


The larger issue for us is more that we can't afford the overhead of maintaining another BOINC project in addition to the main project and the beta project. We're planning on offering the ability to choose which applications you want to run, and I was hoping it would have been ready before we released Astropulse. But Josh is getting anxious to get his degree before he dies of old age, so we released before that feature was ready.

Regarding credit valuation and cross project normalization, even leaders need to bow to political reality at times. And this is the political reality as it stands.
@SETIEric@qoto.org (Mastodon)

ID: 790245 · Report as offensive
Eric Korpela Project Donor
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar

Send message
Joined: 3 Apr 99
Posts: 1382
Credit: 54,506,847
RAC: 60
United States
Message 790246 - Posted: 31 Jul 2008, 4:46:23 UTC - in response to Message 790220.  


My first thought looking at those figures is: why is SetiBeta at 0.788?


Two reasons. We made a factor of two error in the amount of credit that we granted for one version of Astropulse. A factor of two too high.

The second reason is that SETI@home 6.02 was faster than expected. SETI@home 6.03 for Windows went out tonight. We'll see if the trend holds.

Eric
@SETIEric@qoto.org (Mastodon)

ID: 790246 · Report as offensive
Alinator
Volunteer tester

Send message
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 790260 - Posted: 31 Jul 2008, 5:40:05 UTC
Last modified: 31 Jul 2008, 5:41:57 UTC

Regarding Item 1 in the first post: would it be possible to export separate stat XMLs for AP and MB then? This would at least address the issue of 'lumping' them together as an aggregate project for scoring and competition purposes.

Regarding the second post: wouldn't that kind of mistake (over-granting) tend to drive Beta greater than one compared to SAH in the CPP table (as viewed from SAH's row)?

Alinator
ID: 790260 · Report as offensive
UncleVom

Send message
Joined: 25 Dec 99
Posts: 123
Credit: 5,734,294
RAC: 0
Canada
Message 790267 - Posted: 31 Jul 2008, 6:06:35 UTC - in response to Message 790245.  


It is really twofold. One is the introduction of another app into Seti, that being Astropulse; IMO it should be another project, as in "astropulse@home", since after all that is one of the advantages of the BOINC concept. I see its introduction into seti@home as an attempt to guarantee its success by forcing it down the throats of a captive audience at the expense of another balancing act.


The larger issue for us is more that we can't afford the overhead of maintaining another BOINC project in addition to the main project and the beta project. We're planning on offering the ability to choose which applications you want to run, and I was hoping it would have been ready before we released Astropulse. But Josh is getting anxious to get his degree before he dies of old age, so we released before that feature was ready.

Regarding credit valuation and cross project normalization, even leaders need to bow to political reality at times. And this is the political reality as it stands.


Thanks for the explanation regarding the injection of Astropulse into the project. So it more or less comes down to a lack of funding and rather ragged timing.

As for the cross-project valuation and direct comparison, I really fail to see the value in it, especially in light of the problems it causes. Perhaps leaders have to face reality, political or otherwise, and just drop it.

BOINC does make an excellent framework for projects and allows for time sharing between them; to me those seem to be the important bits.
With the large diversity of projects, the differing and sometimes changing use of the distributed computers, and the large variation in machines and clients, the credit comparison is really just annoying fluff.

With enough political pressure I'm sure things can be made to appear to be in step, even if the participants are not happy about it and it causes internal strife within the projects. One has to ask: why is it so darned important?

Marcus

ID: 790267 · Report as offensive
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 790268 - Posted: 31 Jul 2008, 6:12:57 UTC - in response to Message 790116.  


So again, my question to you, Brian: the published standard for BOINC credit pays for work done. Do you think the published standard is wrong, and why?

Please respond in a language which is not fictional or extinct.


The reason I chose the Tamarians from Star Trek: TNG was that you seem to have problems understanding my normal English text.

This is no different from the past. I am not opposed to the concept of equivalent payout of credits across BOINC as a whole. I just do not think that benchmarks or relying on an inaccurate chart at BOINC Combined Statistics is the way to get there.

There is nothing more to it. You want to make more of it than there is. You want to make it out that I'm all in favor of this, that, or the other. Your approach is one of "if you're not with me, then you're against me". I just think that there is a better way than relying on inaccurate data and hoping that a large sample will make the inaccuracies statistically insignificant.
ID: 790268 · Report as offensive
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 790276 - Posted: 31 Jul 2008, 6:32:42 UTC - in response to Message 790209.  

maybe someone should post a list of which projects give high credit compared to seti.
...

The cross-project comparison from http://boinc.netsoft-online.com/e107_plugins/boinc/get_cpcs.php fulfills that role fairly well. Boincstats also has displays of the same data. The main difficulty is having the huge table crossing both ways, so here's an extract simply showing the relative credit rates at projects with at least (1000) hosts in common with S@H:

0.627 (1247) proteins@home
0.690 (1629) LHC@home
0.713 (2484) Leiden Classical
0.720 (15345) World Community Grid
0.732 (2742) ABC@home
0.733 (4612) climateprediction.net
0.741 (7656) Spinhenge@home
0.756 (15944) Rosetta@home
0.758 (3016) uFluids
0.788 (4022) SETI@home Beta
0.803 (5541) malariacontrol.net
0.804 (1125) Superlink@Technion
0.900 (1716) POEM@HOME
0.902 (3559) TANPAKU
0.998 (3599) PrimeGrid
1.054 (2476) SHA-1 Collision Search Graz
1.134 (3536) SIMAP
1.231 (1056) Intelligence Realm
1.266 (39198) Einstein@Home
1.426 (6702) QMC@HOME
1.663 (3608) MilkyWay@home
1.917 (1724) RieselSieve
2.023 (3884) Cosmology@Home

SETI@home of course gives a 1.000 comparison to itself, and the (332369) hosts in common is a clear indication that only active hosts are being considered.
                                                              Joe



Cosmology is currently granting me credit at a level equivalent to, or only slightly higher than, LHC. That 2.023 should be in fairly rapid free-fall as soon as hosts start reporting in. The runtimes increased by 4-5X while credit only increased by about 40% (from 50 to 70), so reporting has slowed because BOINC thinks it has way too much work and/or people have bailed on processing tasks.

Both Einstein and SETI have optimized apps that are not available by default download, meaning the user must actively install an optimized app. It would seem that the faster apps would inflate the value, thus making things appear "too high", but the people running standard apps would then end up getting short-changed if credit was adjusted downwards without determining the impact that optimized applications had upon the set average.

Also, in regard to Einstein, they have made more strides than SETI (I mean actual staff, not volunteers) toward whole-application optimization.

Another issue at Einstein is the performance delta between the Windows application and the Linux application. Since Windows users are in the majority, they mask the faster Linux application.

Again, both at Einstein and here, claimed credit is equal to granted credit (most of the time here, all of the time at Einstein). I do not know if these values equate to BM * time methodologies.


ID: 790276 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19078
Credit: 40,757,560
RAC: 67
United Kingdom
Message 790291 - Posted: 31 Jul 2008, 8:09:07 UTC - in response to Message 790246.  
Last modified: 31 Jul 2008, 8:49:29 UTC


My first thought looking at those figures is: why is SetiBeta at 0.788?


Two reasons. We made a factor of two error in the amount of credit that we granted for one version of Astropulse. A factor of two too high.

The second reason is that SETI@home 6.02 was faster than expected. SETI@home 6.03 for Windows went out tonight. We'll see if the trend holds.

Eric

There is something not right here.

The figure for SetiBeta should be higher if the AP credits were 2x higher than expected.
My own observations say the credits for an AP unit should be in the region of 1200 +/- 20%, based on time to complete and the average cr/time for MB units done with stock applications on Beta.

I suspect 6.02 is not faster than previous versions by any significant factor; rather, it is due to the fact that 99% of all units processed with 6.02 are VHAR. Credits/time for VHAR units are up to 2x those for mid-range units.
AR = 0.53: 20.5 cr/hr
AR = 2.17 (VHAR): 38.75 cr/hr
In over 2,000 units crunched by host 351 with v6.02 I could only find 5 units that were not VHAR.

If what you say is true about AP being 2x the correct figure, then with the large number of VHAR units and a more efficient v6.02 the SetiBeta figure should be even lower, probably down to 0.5.


[edit] That last paragraph is wrong; the SetiBeta figure should be HIGHER.
ID: 790291 · Report as offensive
Juha
Volunteer tester

Send message
Joined: 7 Mar 04
Posts: 388
Credit: 1,857,738
RAC: 0
Finland
Message 790506 - Posted: 31 Jul 2008, 18:25:38 UTC - in response to Message 790291.  

If what you say is true about AP being 2x the correct figure, then with the large number of VHAR units and a more efficient v6.02 the SetiBeta figure should be even lower, probably down to 0.5.


[edit] That last paragraph is wrong; the SetiBeta figure should be HIGHER.


If most Beta participants crunch with stock at Beta and optimized here, and if v8 is about twice as fast as stock, then wouldn't the "correct" figure for Beta be somewhere close to 0.5? (A host gets half the credit per unit of time on Beta compared to here.)

AP in general increases the figure (there is no AK app for AP), and those overcredited AP workunits increase it even more?

-Juha
ID: 790506 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 790507 - Posted: 31 Jul 2008, 18:27:24 UTC - in response to Message 790268.  
Last modified: 31 Jul 2008, 18:28:26 UTC

There is nothing more to it. You want to make more of it than there is. You want to make it out that I'm all in favor of this, that, or the other.

The reason I ask for your position on various concepts is that I really can't tell what your reasoning might be.

Much of the time it appears to be "This is part of Dr. Anderson's evil agenda for BOINC domination -- so it's bad."

In one post you say "look how they're 'sneaking in' equalizing credit across projects" and then "I'm in favor of equal credit across projects in general."

Then, you dive into the details and say "it can't possibly work when some projects use benchmarks * time, and some use flops."

It's not perfect, but it's better than what we have now:

Each project releases a science application, and that application calculates credit by some method, and in general scales the result to (theoretically) match earlier applications.

If they get it wrong, they have to release a new version, with a different multiplier, determined more or less by hand.

This change adds a method to adjust the clients that are "in the wild" without having to push an upgrade, and an algorithm to let BOINC dial in the credits.

It is also server-side, so it can be modified if/as needed without having to go through the hassles of deploying a new version to a bunch of clients.
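
In rough terms, the idea (and this is only my sketch of it, not David's actual algorithm; the target rate and damping factor below are made-up numbers) would look something like this: the server keeps a multiplier per application version and periodically nudges it so the median credit rate of recently validated results lands on the target.

```python
# Sketch of the idea only -- not the actual BOINC server code. The project
# server keeps a multiplier per application version and periodically adjusts
# it so the median credit rate of recent validated results hits the target,
# with no client upgrade needed.

from statistics import median

TARGET_RATE = 100.0 / 86400.0        # hypothetical target: credits per CPU second

def updated_multiplier(current, recent_results, damping=0.1):
    """recent_results: (raw_claimed_credit, cpu_seconds) per validated result,
    where raw_claimed_credit is the app's claim before the multiplier."""
    rates = [c / t for c, t in recent_results if t > 0]
    if not rates:
        return current
    observed = median(rates) * current          # what hosts actually receive
    correction = TARGET_RATE / observed
    # only move part of the way each pass, so the multiplier dials in smoothly
    return current * (1.0 + damping * (correction - 1.0))
```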
ID: 790507 · Report as offensive
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 790520 - Posted: 31 Jul 2008, 18:51:32 UTC - in response to Message 790507.  


Much of the time it appears to be "This is part of Dr. Anderson's evil agenda for BOINC domination -- so it's bad."


At the behest of Ozzfan, I gave David the chance to explain his reasons behind the notion of manipulating the data at the stat site level. David elected to tell me that I "didn't understand" and that there was not any "true credit" that he was talking about manipulating. Bottom line there is that he admitted to a desire to engage in manipulation of some sort, just that it took place outside of BOINC and at a stat site level. As such, I do not have respect for the man. He may have achieved a lot, and he may be very smart, but I cannot respect him. This also means that since he was so willing to manipulate the data and could not see the problem with doing so, I also feel that I cannot trust data coming from him to be real and factual.

I'm hard on David for a reason, Ned...


It's not perfect, but it's better than what we have now:


Is "doing something is better than doing nothing" always true? There are risks in both. If you "do something", you could mess it up worse than it is now if it is not properly thought through. If you "do nothing", you run the risk of things getting worse on their own.

I have a theory about something that makes that chart at BOINC Combined Statistics unreliable. I'm going to monitor for a few days and I'll mention what I'm thinking about in a few days.
ID: 790520 · Report as offensive
Eric Korpela Project Donor
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar

Send message
Joined: 3 Apr 99
Posts: 1382
Credit: 54,506,847
RAC: 60
United States
Message 790521 - Posted: 31 Jul 2008, 18:52:22 UTC - in response to Message 790291.  
Last modified: 31 Jul 2008, 18:57:25 UTC


If what you say is true about AP being 2x the correct figure, then with the large number of VHAR units and a more efficient v6.02 the SetiBeta figure should be even lower, probably down to 0.5.


Yes. The ratio depends upon the type of workunit, which increases the difficulty of figuring this out; that's one reason we need the multiplier to be automatically calculated.

[edit] That last paragraph is wrong; the SetiBeta figure should be HIGHER.


Yes, I got confused by the direction of the numbers. I was assuming that, as in a row across the table that was linked to, lower numbers meant higher credit. A lot of what that number means depends upon the calculation method for credit_per_cpu_second. Up until Astropulse 4.33, credit was granted based upon CPU time rather than FLOP counting. I haven't looked at how credit_per_cpu_second is averaged, whether it's a time-based average or a workunit-based average. If it's time-based, a few Astropulse workunits can overwhelm it. If it's workunit-based, then Astropulse workunits would hardly count.
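
To illustrate why the averaging method matters (toy numbers only, nothing measured from Beta): a handful of very long Astropulse results barely move a per-workunit average, but they dominate a time-weighted one.

```python
# Toy numbers only, to show why the averaging method matters.
# Each entry is (granted_credit, cpu_seconds).

mb = [(30.0, 3600.0)] * 100       # many 1-hour MB tasks at 30 cr/hr (made up)
ap = [(1200.0, 360000.0)] * 3     # a few 100-hour AP tasks at 12 cr/hr (made up)
results = mb + ap

per_workunit = sum(c / t for c, t in results) / len(results)
time_weighted = sum(c for c, t in results) / sum(t for c, t in results)

print(per_workunit * 3600)    # ~29.5 cr/hr: the AP tasks hardly count
print(time_weighted * 3600)   # ~16.5 cr/hr: the few long AP tasks dominate
```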

Anyway here is a plot I promised a while back. The green lines are the floating point benchmarks, the red ones are the integer benchmarks, the blue is the sum of the benchmarks and the yellow is the rate of granted credit. The purple lines are the RAC, which should match the yellow line if every host crunched 100% of the time.

There are two lines for each. The top line for each is the average (it is supposed to be dotted, but there are too many data points for that to happen). The bottom line is the median.

[plot: benchmark rates, granted credit rate, and RAC for SETI@home Beta hosts]
If we were granting perfect credit, the yellow and blue lines would lie on top of one another. The region about a month ago where they come close is when we were only crunching astropulse and only giving credit based upon CPU time.

This is the same plot for the public project.
[plot: the same quantities for the public project]
The large excursions of the averages (due to hosts reporting bad values) show why we need to use medians rather than averages.
@SETIEric@qoto.org (Mastodon)

ID: 790521 · Report as offensive
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 790522 - Posted: 31 Jul 2008, 18:58:57 UTC - in response to Message 790209.  


2.023 (3884) Cosmology@Home


As I mentioned, the value should begin a rapid freefall...and it already has.

1.973 (3867)

Down almost 2.5% in a mere 15 hours.
ID: 790522 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 790552 - Posted: 31 Jul 2008, 20:37:34 UTC - in response to Message 790520.  
Last modified: 31 Jul 2008, 20:38:25 UTC


Much of the time it appears to be "This is part of Dr. Anderson's evil agenda for BOINC domination -- so it's bad."


At the behest of Ozzfan, I gave David the chance to explain his reasons behind the notion of manipulating the data at the stat site level. David elected to tell me that I "didn't understand" and that there was not any "true credit" that he was talking about manipulating. Bottom line there is that he admitted to a desire to engage in manipulation of some sort, just that it took place outside of BOINC and at a stat site level. As such, I do not have respect for the man. He may have achieved a lot, and he may be very smart, but I cannot respect him. This also means that since he was so willing to manipulate the data and could not see the problem with doing so, I also feel that I cannot trust data coming from him to be real and factual.

I'm hard on David for a reason, Ned...

... and I really don't care, unless your only reason to object to something is "because Dr. Anderson might be behind it."

If that's the case, then all of your "technical" arguments are called into question.

... and since your position isn't consistent, that seems to be the case.

I don't know Dr. Anderson. I do know that he has two groups that he has to try to keep happy: the scientists and administrators of every BOINC project, and less-directly, the users at every BOINC project.

There are a lot of projects and people running them, and there are a whole lot of us.

With all due respect, how much would he accomplish if he opened a dialog with every Brian Silvers on the planet?

Milo Bloom said "The first sign of a nervous breakdown is when you start thinking your work is terribly important."

It's not perfect, but it's better than what we have now:


Is "doing something is better than doing nothing" always true? There are risks in both. If you "do something", you could mess it up worse than it is now if it is not properly thought through. If you "do nothing", you run the risk of things getting worse on their own.

The only thing missing from that comment is "so, why bother making changes."

... and if the projects (and BOINC developers) take that attitude, then we have what we've got, with no further chance of improvement.
ID: 790552 · Report as offensive