Boinc's Death Knell?


BarryAZ
Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 1128849 - Posted: 17 Jul 2011, 18:14:38 UTC - in response to Message 1128841.  

Again, this is part of the reason to avoid introducing CreditNew, particularly as a mandatory system. There is a resource question (the developers have other things that need doing), and CreditNew actually compels projects to do specific testing to see whether it yields 'reasonably equitable' and consistent results -- and the projects also have other things that need doing.

I don't mind the concept behind CreditNew (aside from it being perhaps a significant resource drain on the developers) as an OPTION. Pushing it as mandatory strikes me as fraught with peril (operationally and politically).


They very well may, and could catch major bugs. But how many combinations of OS, CPUs, GPUs, RAM, etc., etc., etc. could they possibly test with such a small sampling? I am sure some in-house testing does get done...then on to the limited pool of Alpha testers...and then it goes live on Seti.
I suspect that some bugs don't show themselves until you have thousands of computers involved in the 'test'.
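To put a rough number on the combinatorial explosion described above, here is a toy calculation; every category count below is invented purely for illustration, not real survey data.

```python
# Toy illustration of how quickly host configurations multiply.
# All of the category counts are made up for the example, not survey data.
os_versions = 6        # a few Windows releases, macOS, a couple of Linux distros
cpu_families = 12
gpu_models = 25
driver_versions = 8
ram_configs = 5

combinations = os_versions * cpu_families * gpu_models * driver_versions * ram_configs
print(f"{combinations:,} distinct configurations")   # 72,000 -- far beyond any alpha-test pool
```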


Sirius B
Volunteer tester
Joined: 26 Dec 00
Posts: 24879
Credit: 3,081,182
RAC: 7
Ireland
Message 1128850 - Posted: 17 Jul 2011, 18:17:21 UTC
Last modified: 17 Jul 2011, 18:18:11 UTC

Ah good, the last few posts have been interesting. They've imparted information that I, for one, was unaware of.

From what I've read, it seems that GPU crunching is becoming more & more the mainstream of crunching, so just a personal thought.....

Is it possible for the Boinc devs to create branches? I.e., CPU & GPU branches, with each branch dedicated to its relevant device? "If" - "Goto" springs to mind.
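As a toy sketch of that "If/Goto" branching idea -- choosing an application branch from the device a host offers -- the function and branch names here are hypothetical, not BOINC's actual scheduler code.

```python
# Hypothetical sketch of per-device dispatch, not BOINC's real scheduler logic.
def pick_app_branch(host):
    """Return the application branch suited to the device a host offers."""
    if host.get("cuda_gpus", 0) > 0:
        return "gpu_branch_cuda"
    if host.get("ati_gpus", 0) > 0:
        return "gpu_branch_ati"
    return "cpu_branch"          # fall back to the CPU build

print(pick_app_branch({"cuda_gpus": 1}))  # gpu_branch_cuda
print(pick_app_branch({}))                # cpu_branch
```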

As for myself, I've never stated that I'm leaving the crunching world, just NOT on my own network. I'm fed up with having Boinc problems & spending time researching the solutions, so I'm just keeping "Authorised" customer machines running earlier versions that just work.

Profile GalaxyIce
Volunteer tester
Joined: 13 May 06
Posts: 8927
Credit: 1,361,057
RAC: 0
United Kingdom
Message 1128853 - Posted: 17 Jul 2011, 18:27:32 UTC - in response to Message 1128850.  
Last modified: 17 Jul 2011, 18:30:14 UTC


Is it possible for the Boinc devs to create branches? I.e., CPU & GPU branches, with each branch dedicated to its relevant device? "If" - "Goto" springs to mind.

That's been worked on for a while. In the early days, GPUs in MW were gobbling up all the WUs, leaving a shortage for CPU crunchers. Sending out longer WUs for GPUs was tried out, and in Collatz you can even choose between shorter or longer WUs for GPU processing.

flaming balloons

Sirius B
Volunteer tester
Joined: 26 Dec 00
Posts: 24879
Credit: 3,081,182
RAC: 7
Ireland
Message 1128857 - Posted: 17 Jul 2011, 18:37:16 UTC

Nice to know that that's being worked on.

However, it brings up another issue with regards to testing....

Currently, the Boinc Downloads page is showing the following:

6.12.33 Recommended Version

6.13.0 Development Version (May be unstable - USE only for Testing!)

This is quite understandable, but as this is stated on the main page, just why are Boinc Main & S@H used as testbeds?

Why not just dispose of the dev cycle & just use us all as beta testers? It seems that's what they're doing anyway. Hedging their bets by any chance?

Profile Sarge
Volunteer tester
Joined: 25 Aug 99
Posts: 12273
Credit: 8,569,109
RAC: 79
United States
Message 1128892 - Posted: 17 Jul 2011, 19:37:36 UTC - in response to Message 1128598.  

When I opened my classic account, the e-mail address I used was a .edu account. Once someone leaves an educational institution, the e-mail account only remains open for something like 6 months to a year, depending on the institution and what you did there. (Student? Faculty?)


There's your mistake. You thought that the email needed to be a currently active one to join a Classic account to a BOINC account. In fact, the old email does not need to be active to "merge" a Classic account with a BOINC account.


If that's the case, all that needed to be done was for the intermediary to send me a link with short instructions on how to accomplish it, or for the admin to provide the intermediary with the link to then pass on to me.

Furthermore, without the earlier e-mail address being valid anymore, how would one validate that they indeed owned the Classic account?

Nevertheless, the merger was accomplished, and, as you said, that situation doesn't fall under what was being discussed here.

Glad we agree.

Profile Sarge
Volunteer tester
Joined: 25 Aug 99
Posts: 12273
Credit: 8,569,109
RAC: 79
United States
Message 1128894 - Posted: 17 Jul 2011, 19:40:23 UTC - in response to Message 1128611.  

Ah -- OK -- I thought you were saying your classic 'credits' were merged into your account -- they are listed separately -- as are mine -- I was a busy camper even back then, so I have 168,000 work units and 905,547 hours. I wonder what that would be worth in CreditNew credits <smile>


Sorry that wasn't clear, Barry.

OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1128967 - Posted: 17 Jul 2011, 21:39:14 UTC - in response to Message 1128892.  

If that's the case, all that needed to be done was for the intermediary to send me a link with short instructions on how to accomplish it, or for the admin to provide the intermediary with the link to then pass on to me.


The link has been on the front page since day one. That's how I was able to do it, and the email I originally used to sign up with SETI@Home hadn't been functional for over 6 months. It sounds to me like the Admins were just trying to be helpful and friendly and went the extra mile to help you out.

Furthermore, without the earlier e-mail address being valid anymore, how would one validate they indeed owned the Classic account?


(My emphasis added as I presume that's the question you meant to ask)

They assume that the owner of the Classic account would know the email address they used to sign up. This does not prevent people from guessing email addresses and stealing Classic accounts to attach to BOINC accounts, but the practice is severely limited by the fact that you can only attach a single Classic account to a single BOINC account. So anyone malicious enough to try to steal Classic accounts had better hope that, with the one shot they get, they find a good Classic account.

To explain the sign-up process further:

When you sign up for an account with most places, they ask you for your email address. Some places will send you a verification email that you must use in order to activate your account. Some places just take you at your word so long as you have an @ sign followed by a dotted domain name. SETI@Home is one of those places that simply takes you at your word that the email address you provide is the right one. (Incidentally, this is how someone can create an account with a misspelled email address, because it is not verified by any SMTP server.)

Because the email is simply stored in a database on SETI@Home's User Accounts server, it only checks to see if that email address you entered matches an account that was previously used at SETI@Home. If one is found in the local database, it "attaches" the old account information to the new one.

The email address is just the identifier (sometimes known as the anchor) in the local database. It doesn't matter whether it's actually active or not, unless you forget your password and attempt to have it recovered by having the password sent to an email account you no longer use. Then it becomes a problem.

It wouldn't matter if your current email address is deleted today. All you would have to do is log into SETI with your old email address and then change it using the email address change function found in your account.
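A minimal sketch of the lookup-and-attach flow described above, assuming a hypothetical local accounts table; this is illustrative only, not SETI@Home's actual server code.

```python
# Illustrative sketch: attach a Classic account to a BOINC account by matching
# the (unverified) email address stored in a local database.
# Table layout and field names are hypothetical.
classic_accounts = {
    "old_address@example.edu": {"classic_credit": 168000, "attached_to": None},
}

def attach_classic(boinc_user_id, claimed_email):
    record = classic_accounts.get(claimed_email)
    if record is None:
        return "no Classic account found for that address"
    if record["attached_to"] is not None:
        return "already attached -- only one attach is allowed"
    record["attached_to"] = boinc_user_id   # note: no email verification at all
    return "attached"

print(attach_classic(42, "old_address@example.edu"))  # attached
print(attach_classic(43, "old_address@example.edu"))  # already attached ...
```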

Nevertheless, the merger was accomplished, and, as you said, that situation doesn't fall under what was being discussed here.

Glad we agree.


We do.

BarryAZ
Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 1129015 - Posted: 17 Jul 2011, 23:50:26 UTC - in response to Message 1128894.  

No problem


Ah -- OK -- I thought you were saying your classic 'credits' were merged into your account -- they are listed separately -- as are mine -- I was a busy camper even back then, so I have 168,000 work units and 905,547 hours. I wonder what that would be worth in CreditNew credits <smile>


Sorry that wasn't clear, Barry.


Profile ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 20291
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1129134 - Posted: 18 Jul 2011, 11:18:22 UTC - in response to Message 1128840.  
Last modified: 18 Jul 2011, 11:20:59 UTC

OK -- reasonably equitable sounds right.

... I think the issue there is that some of the hardware (particularly GPU hardware) can be utilized in ways that may well *legitimately* produce large credit numbers. I suspect that some of the thrust behind CreditNew is to 'control' the GPU project scores as they are seen by some folks at BOINC central as 'disturbing the force'. The problem here is that it appears that most of the force behind this comes from folks who don't have that much experience with designing applications for GPU use...


At the moment, we seem to credit using a loose combination of maximum integer + maximum float CPU compute performance + an arbitrary fudge factor for some arbitrary feeling of 'worth' compared to s@h classic WUs. Even for projects that do not do any computations!
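As a rough sketch of the "benchmark times time, plus a fudge factor" style of claim just described -- the constants and names below are illustrative only, not actual BOINC code:

```python
# Rough, illustrative sketch of a benchmark-based credit claim.
# The scale constant stands in for the kind of arbitrary "fudge factor" involved.
def benchmark_credit_claim(cpu_seconds, float_gflops, int_giops,
                           scale=200.0 / 86400.0):
    """Claim credit from measured CPU time and the host's benchmark scores."""
    benchmark = (float_gflops + int_giops) / 2.0   # loose float + integer mix
    return cpu_seconds * benchmark * scale

# A host benchmarking 3 GFLOPS / 9 GIOPS crunching for one hour would claim:
print(round(benchmark_credit_claim(3600, 3.0, 9.0), 1))   # 50.0
```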

We cannot usefully assign a number to the scientific value of the WUs unless you try to start costing how much money/time resource a project saves in funding by taking advantage of Boinc.

Hence, the various ideas that have been floated in the past to assign credit based upon measurable/calibrated compute resource utilised (actually used) by a project. That way, you get consistent credit awarding, and the users can measurably see what proportion of their hardware is utilised. It's then up to the projects as to how well they take advantage of the compute resource used to speed up their results.
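And, for contrast, a sketch of the calibrated "credit for compute resource actually used" idea; again, the constant and names are invented for illustration.

```python
# Illustrative sketch: award credit from the floating-point work a task actually
# performed, independent of which device or optimisation performed it.
CREDIT_PER_TERAFLOP = 0.1   # invented calibration constant, the same for every host

def measured_credit(flops_performed):
    return flops_performed / 1e12 * CREDIT_PER_TERAFLOP

# The same work unit earns the same credit whether a CPU grinds on it for hours
# or a GPU finishes it in a minute:
print(measured_credit(4.5e14))   # 45.0 credits either way
```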


... When a project starts at a certain 'credit level' and then over time optimizes code, should the credits for the same work unit be reduced because of the greater efficiency?...


There is the big controversy. We started out by counting how many WUs were crunched, and so that has produced a legacy of trying to count some sort of arbitrary 'scientific value'. Hence, optimisations should produce a higher RAC. That leaves open the possibility for badly non-optimal test code to be run on Boinc that then gives an embarrassing speedup and unacceptably high credits when the code is later optimised.

Awarding credits measured and calibrated on what hardware resource is usefully used will give consistent credits regardless of what hardware is used and regardless of optimisations. Note that only if an optimisation usefully makes use of more of the available hardware will you see an increase in RAC.

Such a change would allow science to be done on the statistics for the credits and directly measure the compute resource available/used. However, public perceptions and politics may not allow that to be possible.

One example is for one of my test systems: RAC of about 1000 CPU-only, 150 000 GPU intensive (250 000 max possible), and bimbling along at about 10 000 with a mix of CPU + GPU Boinc projects. The RAC suggests I should go GPU-only, no-brainer! What I consider to be a better real-world mix for what projects I consider to be most worthwhile gives a mere 5% or so of the maximum possible RAC. (That test made the boinc plots unusable - the GPU project is a Mount Everest making all the other projects smear out to be a flat line at the base.)


Hence the dreamy arbitrary nature of the present credits system?

Honest rewarding of GPU capability may well be considered to be far too high and too demoralising for keeping CPU-only crunchers going. Hence, we are going to suffer credit subsidies to the CPU people for the sake of maintaining as wide a public outreach as possible?

GPU capability is significantly different to that of the CPU.


Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)

Profile Blurf
Volunteer tester
Joined: 2 Sep 06
Posts: 8962
Credit: 12,678,685
RAC: 0
United States
Message 1129221 - Posted: 18 Jul 2011, 15:04:00 UTC

For those concerned, the Milkyway Double Credit fundraiser has been cancelled.

Further specific discussion available at http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=2511&nowrap=true#50182


Profile Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 30653
Credit: 53,134,872
RAC: 32
United States
Message 1129253 - Posted: 18 Jul 2011, 15:33:22 UTC

Dr. Anderson asked it be canceled. http://boinc.berkeley.edu/dev/forum_thread.php?id=6728&nowrap=true#39074
Apparently it is not kosher with the goals of BOINC.

I'd suggest further discussion be on the BOINC boards so the BOINC developers can see it, as they don't read project boards.


BarryAZ
Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 1129309 - Posted: 18 Jul 2011, 17:07:18 UTC - in response to Message 1129134.  

Martin, thanks for the explanation -- it seems that it explains a number of the issues some folks have raised with CreditNew.

It also provides, if you will, grist for the mill of those folks who suggest that CreditNew should be readily project-optional, inasmuch as the central determinations have a significant degree of arbitrariness (such as the discounting of GPU processing).

Notwithstanding the hope of having some sort of centralized and equitable credit system -- one pushed from a central source -- it seems apparent that the problems are very much there. That this system is in its own way arbitrary is pointed out by some of what you posted:



At the moment, we seem to credit using a loose combination of maximum integer + maximum float CPU compute performance + an arbitrary fudge factor for some arbitrary feeling of 'worth' compared to s@h classic WUs. Even for projects that do not do any computations!

We cannot usefully assign a number to the scientific value of the WUs unless you try to start costing how much money/time resource a project saves in funding by taking advantage of Boinc.

Hence, the various ideas that have been floated in the past to assign credit based upon measurable/calibrated compute resource utilised (actually used) by a project. That way, you get consistent credit awarding, and the users can measurably see what proportion of their hardware is utilised. It's then up to the projects as to how well they take advantage of the compute resource used to speed up their results.

There is the big controversy. We started out by counting how many WUs were crunched, and so that has produced a legacy of trying to count some sort of arbitrary 'scientific value'. Hence, optimisations should produce a higher RAC. That leaves open the possibility for badly non-optimal test code to be run on Boinc that then gives an embarrassing speedup and unacceptably high credits when the code is later optimised.

Awarding credits measured and calibrated on what hardware resource is usefully used will give consistent credits regardless of what hardware is used and regardless of optimisations. Note that only if an optimisation usefully makes use of more of the available hardware will you see an increase in RAC.

Such a change would allow science to be done on the statistics for the credits and directly measure the compute resource available/used. However, public perceptions and politics may not allow that to be possible.

One example is for one of my test systems: RAC of about 1000 CPU-only, 150 000 GPU intensive (250 000 max possible), and bimbling along at about 10 000 with a mix of CPU + GPU Boinc projects. The RAC suggests I should go GPU-only, no-brainer! What I consider to be a better real-world mix for what projects I consider to be most worthwhile gives a mere 5% or so of the maximum possible RAC. (That test made the boinc plots unusable - the GPU project is a Mount Everest making all the other projects smear out to be a flat line at the base.)

Hence the dreamy arbitrary nature of the present credits system?

Honest rewarding of GPU capability may well be considered to be far too high and too demoralising for keeping CPU-only crunchers going. Hence, we are going to suffer credit subsidies to the CPU people for the sake of maintaining as wide a public outreach as possible?

GPU capability is significantly different to that of the CPU.


Happy fast crunchin',
Martin


Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1129325 - Posted: 18 Jul 2011, 17:28:31 UTC

Wow.. browsed through a MW user's credits, running over 200K RAC with 2 5600-series ATI cards and a quad-core CPU..

Just.. wow. They are beyond generous with credits.

Well, combined Boinc credits now mean absolute zero to me.


Janice

Profile skildude
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1129328 - Posted: 18 Jul 2011, 17:32:32 UTC - in response to Message 1129325.  

MW is not generous, just highly optimized for ATI GPU use


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope

Profile soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1129331 - Posted: 18 Jul 2011, 17:35:23 UTC - in response to Message 1129328.  

MW is not generous, just highly optimized for ATI GPU use


Um.. sorry, do not buy it.
Janice

Profile Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 30653
Credit: 53,134,872
RAC: 32
United States
Message 1129343 - Posted: 18 Jul 2011, 18:08:57 UTC - in response to Message 1129325.  

Well combined boinc credits now means absolute zero to me.

Haven't meant much to me since I first saw tables of relative credit from project to project.

Simplex0
Volunteer tester
Joined: 28 May 99
Posts: 124
Credit: 205,874
RAC: 0
Message 1129348 - Posted: 18 Jul 2011, 18:12:35 UTC - in response to Message 1129331.  
Last modified: 18 Jul 2011, 18:13:13 UTC

MW is not generous, just highly optimized for ATI GPU use


Um.. sorry, do not buy it.


In my opinion it has overwhelmingly the best-written optimized code in the whole BOINC community and has from time to time reached 1.4 petaflops.


Regarding your example with the "2 5600 series ATI cards", I think you are wrong, as this card does not support double-precision calculation in hardware, according to page 7 in this document


If you are interested in the development of the code, there is some interesting reading here with Gipsel, who was one of the guys that actually wrote the machine code for the ATI GPU cards.


The result was a speed increase of nearly 10,000 times over a single Core 2 processor @ 3 GHz running the original program.

BarryAZ
Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 1129357 - Posted: 18 Jul 2011, 18:28:47 UTC - in response to Message 1129331.  

Right, neither does DA.

MW is not generous, just highly optimized for ATI GPU use


Um.. sorry, do not buy it.


Profile skildude
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1129376 - Posted: 18 Jul 2011, 19:08:55 UTC - in response to Message 1129331.  

MW is not generous, just highly optimized for ATI GPU use


Um.. sorry, do not buy it.

It's true. You will run the same WUs on your GPU as on your CPU. The CPU will take hours to finish the same work as a GPU. My GPU can finish a short MW WU in about 60 seconds. I get 213 credits for that. I could run the same WU on my CPU for about 5 hours and get the same credit.
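Taking those figures at face value, the credit rates work out roughly as follows (a quick sanity check on the quoted numbers, not project data):

```python
# Quick arithmetic on the figures quoted above: 213 credits per work unit.
gpu_rate = 213 / (60 / 3600)    # one WU every 60 seconds -> 12,780 credits/hour
cpu_rate = 213 / 5              # one WU every 5 hours    ->    42.6 credits/hour
print(gpu_rate, cpu_rate, round(gpu_rate / cpu_rate))   # ratio of roughly 300x
```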


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope

BarryAZ
Joined: 1 Apr 01
Posts: 2580
Credit: 16,982,517
RAC: 0
United States
Message 1129381 - Posted: 18 Jul 2011, 19:17:15 UTC

Perhaps one option is to recognize the different efficiencies by having two (or perhaps three) groupings for stats sites.

1) CPU
2) GPU (or perhaps split further: ATI GPU and CUDA GPU)

That way the actual processing work done with GPUs doesn't get *punished* for its efficiency, and the large amount of distributed work done on CPUs doesn't face a disincentive either.

Since, apparently, CreditNew won't be 'equitable' regarding GPU versus CPU (in order not to 'harm' the CPU users with an equitable credit allocation), why not recognize the difference and embrace it by offering a different ranking grouping (or retaining the existing 'combined score' ranking and also having the separate rankings as well).
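A minimal sketch of that split: keep the usual combined total but also track per-device-class totals that stats sites could rank separately. The field names here are hypothetical, not any project's actual schema.

```python
# Hypothetical per-device-class credit totals alongside the usual combined score.
from collections import defaultdict

host_credit = defaultdict(lambda: defaultdict(float))

def grant_credit(host, device_class, amount):
    """device_class might be 'cpu', 'ati_gpu' or 'cuda_gpu'."""
    host_credit[host][device_class] += amount
    host_credit[host]["combined"] += amount

grant_credit("host-1", "cuda_gpu", 213.0)
grant_credit("host-1", "cpu", 42.6)
print(dict(host_credit["host-1"]))
# -> cuda_gpu: 213.0, cpu: 42.6, combined: ~255.6
```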