Cross Project Credit Equalization and Adjustment

Message boards : Number crunching : Cross Project Credit Equalization and Adjustment

Previous · 1 . . . 4 · 5 · 6 · 7 · 8 · Next

AuthorMessage
Profile Pappa
Volunteer tester
Avatar

Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 342918 - Posted: 20 Jun 2006, 4:23:54 UTC

If you look at the bulk of Einstein it is easy for them to be hidden until they go S5 only...

Digger: the HostID's that you posted will help...

Pappa

Please consider a Donation to the Seti Project.

ID: 342918 · Report as offensive
krgm
Volunteer tester

Send message
Joined: 2 Jun 05
Posts: 30
Credit: 72,152
RAC: 0
Canada
Message 342937 - Posted: 20 Jun 2006, 4:53:19 UTC

I am using an Athlon XP 2600+ (Barton). Five short Einstein S5 results so far (virtually the same time and credit each)

~ 3900 seconds 20.00 credits = 18.5 / hr

Seti enhanced (only 2 results on file, averaged)

~ 26600 seconds 59.75 credits = 8.1 / hr

almost 2.3x
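The credit-rate arithmetic quoted above can be reproduced with a short script (the figures are taken directly from the post):

```python
# Sketch of the credits-per-hour arithmetic used throughout this thread.

def credits_per_hour(credits: float, seconds: float) -> float:
    """Granted credit divided by run time in hours."""
    return credits / (seconds / 3600.0)

einstein = credits_per_hour(20.00, 3900)    # short Einstein S5 result
seti = credits_per_hour(59.75, 26600)       # SETI Enhanced result

print(f"Einstein: {einstein:.1f} cr/hr")    # ~18.5
print(f"SETI:     {seti:.1f} cr/hr")        # ~8.1
print(f"Ratio:    {einstein / seti:.1f}x")  # ~2.3
```

The same calculation underlies the per-project ratios posted later in the thread.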
ID: 342937 · Report as offensive
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19960
Credit: 40,757,560
RAC: 67
United Kingdom
Message 342964 - Posted: 20 Jun 2006, 5:18:58 UTC - in response to Message 342882.  

P3 925 MHz
Seti 4.18 - 4.766 cr/hr (extrapolated from Beta forum_thread.php?id=434#4562)
Seti Beta windows V5.12 7.82 cr/hr
Seti Enhanced between 3.3 and 4.8 cr/hr, average 4.22 cr/hr

Einstein,
Einstein (v 4.79) units 3.95 cr/hr
Albert (v 4.37) units 3.86 cr/hr

Pentium M 1.86 GHz
Seti Beta windows ver 5.12
AR=0.53, 29.53 cr/hr
AR=2.8, 36.08 cr/hr

Seti Enhanced
common AR 14.42 cr/hr
high AR 17.92 cr/hr
low AR: all errored out

Einstein
Einstein (v 4.79) units 12.42 cr/hr
Albert (v 4.37) units 15.7 cr/hr

CPDN Sulphur 17.19 cr/hr

Andy

Update for Einstein

P3 925 MHz
Einstein 4.02
Short 7.7 cr/hr (average of 10 units)
Long two units done, one at 5.84 cr/hr
the second at 6.04 cr/hr

Andy


Forgot another update earlier

Seti Enhanced VLAR units for the Pent M
three units done at average of 11.2 cr/hr.

Andy
ID: 342964 · Report as offensive
Profile Digger
Volunteer tester

Send message
Joined: 4 Dec 99
Posts: 614
Credit: 21,053
RAC: 0
United States
Message 346756 - Posted: 23 Jun 2006, 19:26:10 UTC
Last modified: 23 Jun 2006, 19:41:05 UTC


Updated with Einstein S5 Long Workunit Data:

My Computer

Intel Celeron D 2.93 GHz
256 KB L2 cache
512 MB RAM

SETI Enhanced Results

7.40 credit/hour
Ratio = 1.0

Einstein S5 Results

Short S5: 13.81 credit/hour
Ratio = 1.87

Long S5: 13.73 credit/hour
Ratio = 1.86

Notes:

* Average of last six results for each project (where available)
* Stock applications all around
* Enhanced data are from SETI Beta

Dig

ID: 346756 · Report as offensive
Profile Digger
Volunteer tester

Send message
Joined: 4 Dec 99
Posts: 614
Credit: 21,053
RAC: 0
United States
Message 348485 - Posted: 25 Jun 2006, 18:16:53 UTC


Continuing previous post and adding Rosetta data:

My Computer

Intel Celeron D 2.93 GHz
256 KB L2 cache
512 MB RAM

SETI Enhanced Results

7.40 credit/hour
Ratio = 1.0

Einstein S5 Results

Short S5: 13.81 credit/hour
Ratio = 1.87

Long S5: 13.73 credit/hour
Ratio = 1.86

Rosetta Results

6.80 credit/hour
Ratio = 0.92

Notes:

* Average of last relevant results for each project
* Stock client and applications
* Enhanced data are from SETI Beta

I'm starting to test an Akos optimized application for Einstein so these are the last non-optimized comparisons I can provide you.

Dig

ID: 348485 · Report as offensive
Douglas Hoen

Send message
Joined: 31 May 06
Posts: 2
Credit: 851,536
RAC: 0
Canada
Message 353511 - Posted: 1 Jul 2006, 4:23:35 UTC

Sorry that I can't provide any data myself on this question, but I have just started "Boincing" and am mainly running one project per computer. But I would like to add two points to the discussion.

First, I would simply like to say that I agree with and appreciate the effort to provide an equitable cross-project credit granting scheme. Judging from some recent threads I have read (which, I must say, have been a scary and hopefully non-representative introduction for me to this community), the absence of such a scheme might severely reduce the computing power available to these distributed projects.

(As a newb, I ask that you please forgive the probable ignorance and oversimplification of the following.) Second, would it not be possible to build the collection of this type of statistic into Boinc and require that Boinc projects agree to adjust their credit-granting schemes to level the playing field?

I realize that the optimization issue is an important complication here and it seems foolish not to reward optimizations that increase available computing resources. Perhaps if the credit granted to any given computer running 'official' (non-optimized) programs was as equal as possible between projects, there would still be incentives to create optimizations (which might eventually be incorporated into official programs, to the benefit of science).

-- Doug
ID: 353511 · Report as offensive
Profile Clyde C. Phillips, III

Send message
Joined: 2 Aug 00
Posts: 1851
Credit: 5,955,047
RAC: 0
United States
Message 353835 - Posted: 1 Jul 2006, 18:28:08 UTC

Before worrying about cross-project equalization, things within Seti should be equalized. When it takes me 28,000 seconds to crunch a 56-cobblestone low-angle-range unit with the default Seti cruncher and a Pentium D950, and somebody else with just a Pentium D930 and Crunch3r's cruncher only 9,000 seconds to do it, that is appalling. How can we equalize across projects when there is a 3-to-1 disparity completely within Seti?
ID: 353835 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 14015
Credit: 208,696,464
RAC: 304
Australia
Message 354039 - Posted: 2 Jul 2006, 0:06:24 UTC - in response to Message 353835.  

When it takes me 28,000 seconds to crunch a 56-cobblestone low-angle-range unit with the default Seticruncher and a Pentium D950 and somebody else with just a Pentium D930 and Crunch3r's cruncher only 9,000 seconds to do it, that is appalling. How can we equalize across projects when there is a 3-to-1 disparity completely within Seti?

Same credit for the same work- where's the problem?
If you want to do it faster:
1. get an even faster machine
2. use an optimised application
3. do both of the above.
Grant
Darwin NT
ID: 354039 · Report as offensive
Profile Pappa
Volunteer tester
Avatar

Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 354109 - Posted: 2 Jul 2006, 2:13:01 UTC - in response to Message 353511.  
Last modified: 2 Jul 2006, 2:13:32 UTC

Doug

Welcome to Seti BOINC

If you ask, I am a nobody who has been working my tail off on occasion to help move both Seti and BOINC ahead in some small way... There are a couple of things "afoot": one is to keep Seti going (funding), and two is the larger move to minimize unfair credit claims... If you know where to look, more people/projects are taking a larger look...

Not wanting to upset too many people, Seti is the "poster child" for Distributed Computing. They do more with less and somehow have made it fit.

Eric, is working to "adjust" the credit to closer to the original design goal. Several Users have been helping as there are several parts that need to be adjusted. Einstein also has started working to provide a more "unified" credit. This will end a bit of the confusion and hopefully allow "us Users" to feel we have a fair return for our computer time... Yes some only have one computer, many have several and others keep buying newer, faster, stronger machines. While the changes have caused a bit of controversy, things are moving in the right direction. It takes time.

I do have to say that in all my years in computing, the only "Stupid Question" is the one you carry away when you had the right person to ask in front of you. With the current atmosphere it may be harder to ask the question. Many will respond and try to get the answers... Please do not let those that detract sway you too much... For the most part Seti is very good about allowing "opinions" to be heard... Yes sometimes it can get out of hand...

Me, I am back to getting ready for a small neighborhood party (the 4th of July) and collecting stats... Not to mention starting to work the job market, as my current contract is ending...

Have Fun, please if something causes a question then ask!

Keep Crunching

Pappa

Sorry that I can't provide any data myself on this question, but I have just started "Boincing" and am mainly running one project per computer. But I would like to add two points to the discussion.

First, I would simply like to say that I agree with and appreciate the effort to provide an equitable cross-project credit granting scheme. Judging from some recent threads I have read (which, I must say, have been a scary and hopefully non-representative introduction for me to this community), the absence of such a scheme might severely reduce the computing power available to these distributed projects.

(As a newb, I ask that you please forgive the probable ignorance and over simplification of the following). Second, would it not be possible to build the collection of this type of statistic into Boinc and require that Boinc projects agree to adjust their credit granting schemes to level the playing field?

I realize that the optimization issue is an important complication here and it seems foolish not to reward optimizations that increase available computing resources. Perhaps if the credit granted to any given computer running 'official' (non-optimized) programs was as equal as possible between projects, there would still be incentives to create optimizations (which might eventually be incorporated into official programs, to the benefit of science).

-- Doug


Please consider a Donation to the Seti Project.

ID: 354109 · Report as offensive
Profile Pappa
Volunteer tester
Avatar

Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 354125 - Posted: 2 Jul 2006, 2:45:02 UTC - in response to Message 353835.  

Clyde

Eric set up this message thread to help correct these problems... His hope was that "users" could provide a representative sample that could be used to correct the issue. So besides the problem with angle ranges, and not talking about "optimized applications", the work is to define what is correct for the computer that does the work... Eric also stated he had no problems with "optimizations" as long as they follow the GPL...

If you read back, you will find that I am one of the people who helped Crunch3r create the first "optimized app". I am also the one who relayed his request that users stop using the optimized app. It was also stated that if you have the optimized app, you are allowed to use it. You cannot, however, redistribute it.

Not wanting to start new hate wars, there are things afoot that will at some point make it obsolete... Newer versions of BOINC will have the capability of determining the CPU type and delivering an application optimized for that CPU type... It does take time; it is still in alpha testing... It affects all projects, not just Seti. So welcome to the journey; I have been working on it since 2000. But then we all have to have a hobby...

If you desire, I can send a copy of the spreadsheet that I keep updated for Eric. It is in Excel, but Open Office will be able to read it and see the data. It covers data from twenty-some machines that run Seti, Seti Beta and Einstein... That data covers several BOINC core clients (optimized and unoptimized) and applications (optimized and unoptimized) over several months. It is a lot of hours of work to ensure that users receive "fair credit" for their computer time.

If you or anyone else desires to look, my public email address is al.setiboinc (at) gmail.com - replace the (at) with the @ symbol...

Regards

Pappa

Before worrying about cross-project equalization things within Seti should be equalized. When it takes me 28,000 seconds to crunch a 56-cobblestone low-angle-range unit with the default Seticruncher and a Pentium D950 and somebody else with just a Pentium D930 and Crunch3r's cruncher only 9,000 seconds to do it, that is appalling. How can we equalize across projects when there is a 3-to-1 disparity completely within Seti?


Please consider a Donation to the Seti Project.

ID: 354125 · Report as offensive
Profile Clyde C. Phillips, III

Send message
Joined: 2 Aug 00
Posts: 1851
Credit: 5,955,047
RAC: 0
United States
Message 354540 - Posted: 2 Jul 2006, 18:44:06 UTC

Pappa, I saw someone (sorry to that someone - I don't remember the author) say that all flops are not created equal. The flops get more difficult in the following order: add, subtract, multiply, divide, cosine. If a weighting could be assigned to each type of flop, that could possibly smooth things a bit across projects. But identifying and weighting flops, as well as counting them, would add more overhead. Thanks for your response. If I try to make my own cruncher from Simon's instructions I'm sure to hit a snag somewhere, and whatever I make will be obsolete when and if completed. If I don't try, or try and fail, it'll be two years before new stellar data comes out. That's Murphy's law. Yes, I reread the ALFA news. It will require a completely new cruncher that will obsolete all existing ones, per my understanding.
ID: 354540 · Report as offensive
John McLeod VII
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 354674 - Posted: 2 Jul 2006, 22:11:06 UTC - in response to Message 354540.  

Pappa, I saw someone (sorry to that someone- I don't remember the author) say that all flops are not created equal. The flops get more difficult in the following order: add, subtract, multiply, divide, cosine. If a weighting could be assigned to each type of flop that could possibly smooth things a bit across projects. But, identifying and weighting flops as well as counting them would add more overhead. Thanks for your response. If I try to make my own cruncher from Simon's instructions I'm sure to hit a snag somewhere and whatever I make will be obsolete when and if completed. If I don't try, or try and fail, it'll be two years before new stellar data comes out. That's Murphy's laws. Yes, I reread the Alfa news. It will require a completely new cruncher that will obsolete all existing ones per my understanding.

I believe that the someone was Pappa. It is even more difficult - depending on the architecture of the device, the ratios are different. On quite a large number of devices, addition and subtraction take the same amount of time. Other operations that take a differing amount of time are exponentials and logarithms.
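The idea of weighting flops by operation type can be sketched as follows. The weights here are invented for illustration; as noted above, the real ratios differ from one CPU architecture to another:

```python
# Toy illustration of weighted FLOP counting: each operation type is
# charged a different cost. The weights are invented for this example;
# real per-op costs vary by CPU architecture.

OP_WEIGHTS = {
    "add": 1.0, "sub": 1.0,       # often identical in hardware
    "mul": 1.0,
    "div": 8.0,                   # typically much slower
    "cos": 20.0, "exp": 20.0, "log": 20.0,
}

def weighted_flops(op_counts: dict) -> float:
    """Sum operation counts scaled by their per-op weight."""
    return sum(OP_WEIGHTS[op] * n for op, n in op_counts.items())

# e.g. a kernel doing 1e6 adds, 1e6 muls and 1e5 cosines:
work = weighted_flops({"add": 1e6, "mul": 1e6, "cos": 1e5})
print(work)  # 4000000.0
```

The extra bookkeeping this implies is exactly the overhead Clyde's post worries about.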


BOINC WIKI
ID: 354674 · Report as offensive
Profile Pappa
Volunteer tester
Avatar

Send message
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 354775 - Posted: 3 Jul 2006, 2:24:45 UTC - in response to Message 354674.  

John

Thank you, but I am afraid I cannot take credit for this... I have presented a lot of facts that support it...

Pappa

Pappa, I saw someone (sorry to that someone- I don't remember the author) say that all flops are not created equal. The flops get more difficult in the following order: add, subtract, multiply, divide, cosine. If a weighting could be assigned to each type of flop that could possibly smooth things a bit across projects. But, identifying and weighting flops as well as counting them would add more overhead. Thanks for your response. If I try to make my own cruncher from Simon's instructions I'm sure to hit a snag somewhere and whatever I make will be obsolete when and if completed. If I don't try, or try and fail, it'll be two years before new stellar data comes out. That's Murphy's laws. Yes, I reread the Alfa news. It will require a completely new cruncher that will obsolete all existing ones per my understanding.

I believe that the someone was Pappa. It is even more difficult - depending on the architecture of the device, the ratios are different. On quite a large number of devices, addition and subtraction take the same amount of time. Other operations that take a differing amount of time are exponentials and logarithms.


Please consider a Donation to the Seti Project.

ID: 354775 · Report as offensive
Idefix
Volunteer tester

Send message
Joined: 7 Sep 99
Posts: 154
Credit: 482,193
RAC: 0
Germany
Message 355514 - Posted: 3 Jul 2006, 21:26:49 UTC - in response to Message 354125.  

Hi,
Newer Versions of BOINC will have the capability of determining the CPU type to deliver an application that could be optimized for the CPU type...
It looks like the next question with regard to "Cross Project Credit Equalization and Adjustment" will arise: which of the optimized applications sets the "standard"? Which application is used to adjust the credit system of a project? You can already see what I mean at Einstein@home. The new Einstein application already has basic detection of CPU types (with or without SSE).

My AMD Athlon XP 2000+ got an average of 7.20 granted credits per hour for the standard S4 workunits. Now it gets an average of 13.36 granted credits per hour for the standard S5 workunits. (The same computer gets an average of 5.10 granted credits per hour for the standard seti_enhanced workunits.)

It looks like the S5 application with SSE disabled was used as the base for the adjustment of the new credit system. Computers with SSE capability are getting a bonus. I don't have a computer without SSE capability, so I cannot compare the standard S4 application with the standard SSE-less S5 application. But somebody mentioned in the Cruncher's Corner of Einstein that the CPU time of an AMD Athlon XP 1700+ without SSE capability was twice as long as the CPU time of an AMD Athlon XP 1600+ with SSE enabled.

Looking at the project stats of Einstein, it appears that quite a lot of users are getting this "SSE bonus". The credit production rate of Einstein@home has increased significantly since the change from S4 to S5.

It's still a very long way to the best credit system ...

Regards,
Carsten

ID: 355514 · Report as offensive
Profile Digger
Volunteer tester

Send message
Joined: 4 Dec 99
Posts: 614
Credit: 21,053
RAC: 0
United States
Message 356141 - Posted: 4 Jul 2006, 5:42:52 UTC


Eric,

Don't know how much help these are, but since we're back on stock apps at Einstein,
here are my updated comparisons:

My Computer

Intel Celeron D 2.93 GHz
256 KB L2 cache
512 MB RAM

SETI Enhanced Results

6.69 credit/hour
Ratio = 1.0

Einstein S5 Results

Short S5: 13.62 credit/hour
Ratio = 2.04

Long S5: 11.85 credit/hour
Ratio = 1.77

Rosetta Results

7.42 credit/hour
Ratio = 1.11

Notes:

* Average of last six results for each project (where available)
* Stock applications all around
* Enhanced data are from SETI Beta

Please feel free to track my hosts as needed.

Dig

ID: 356141 · Report as offensive
Eric Korpela Project Donor
Volunteer moderator
Project administrator
Project developer
Project scientist
Avatar

Send message
Joined: 3 Apr 99
Posts: 1385
Credit: 54,506,847
RAC: 60
United States
Message 358836 - Posted: 6 Jul 2006, 16:15:02 UTC

Thanks for all your help. A special thanks to Pappa for compiling the stats into something usable. I've adjusted the credit multiplier to 3.81 (up 14% from the original 3.35) which should better match the other projects. Version 5.17 should be going out to beta today. (Yes, I know I've said that before. This time I mean it.)
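The 14% figure follows directly from the two multipliers quoted:

```python
# Check the credit-multiplier change Eric describes: 3.35 -> 3.81.
old, new = 3.35, 3.81
increase = (new - old) / old
print(f"{increase:.0%}")  # 14%
```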

I'm also in discussion with some of the other projects about using host CPID to find cross project hosts to use as comparison machines. We'll probably need to add some info to the host table or change the meaning of some fields, so it'll take a lot of discussion and negotiation to figure it out.

Eric


ID: 358836 · Report as offensive
John McLeod VII
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 359721 - Posted: 8 Jul 2006, 3:23:08 UTC - in response to Message 358836.  

Thanks for all your help. A special thanks to Pappa for compiling the stats into something usable. I've adjusted the credit multiplier to 3.81 (up 14% from the original 3.35) which should better match the other projects. Version 5.17 should be going out to beta today. (Yes, I know I've said that before. This time I mean it.)

I'm also in discussion with some of the other projects about using host CPID to find cross project hosts to use as comparison machines. We'll probably need to add some info to the host table or change the meaning of some fields, so it'll take a lot of discussion and negotiation to figure it out.

Eric


It will probably require adding credits/hour for all projects to the RPC request and credits/hour for the particular project to the scheduler reply. Possibly limited to the last X hours of crunching.


BOINC WIKI
ID: 359721 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Send message
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 360481 - Posted: 8 Jul 2006, 21:26:20 UTC - in response to Message 359721.  

Thanks for all your help. A special thanks to Pappa for compiling the stats into something usable. I've adjusted the credit multiplier to 3.81 (up 14% from the original 3.35) which should better match the other projects. Version 5.17 should be going out to beta today. (Yes, I know I've said that before. This time I mean it.)

I'm also in discussion with some of the other projects about using host CPID to find cross project hosts to use as comparison machines. We'll probably need to add some info to the host table or change the meaning of some fields, so it'll take a lot of discussion and negotiation to figure it out.

Eric


It will probably require adding credits/hour for all projects to the RPC request and credits/hour for the particular project to the scheduler reply. Possibly limited to the last X hours of crunching.

Perhaps adding RAH (Recent Average cpu-time/day in Hours) would be enough. It would be calculated on the same basis as RAC so that RAC/RAH would be recent credit/hour. A single CPU host crunching 24/7 for one project would have a RAH near 24 while multi-CPU hosts would be higher and part-time crunching lower.

This would add little to server load, when credit is granted the routines which now update RAC for the host and user would simply also update RAH. And it only requires one additional database entry.
                                                           Joe
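Joe's RAH proposal can be sketched as an exponentially decaying average, analogous to how BOINC maintains RAC (a one-week half-life). The update rule below is a simplified illustration, not BOINC's exact code:

```python
import math

# Sketch of the RAH idea: a decaying average of daily CPU hours,
# updated with the same kind of exponential-decay rule BOINC uses
# for RAC. Simplified: one update per reported result is assumed.

HALF_LIFE_DAYS = 7.0  # RAC's one-week half-life

def update_average(avg: float, new_daily_hours: float,
                   days_since_last: float) -> float:
    """Blend the old average toward the new observation."""
    decay = math.exp(-math.log(2) * days_since_last / HALF_LIFE_DAYS)
    return avg * decay + new_daily_hours * (1.0 - decay)

# RAC / RAH then gives recent credit per CPU-hour (figures invented):
rac, rah = 120.0, 24.0
print(rac / rah)  # 5.0
```

As Joe notes, this adds only one field per host and one extra update in the credit-granting path.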
ID: 360481 · Report as offensive
Profile Saenger
Volunteer tester
Avatar

Send message
Joined: 3 Apr 99
Posts: 2452
Credit: 33,281
RAC: 0
Germany
Message 364382 - Posted: 12 Jul 2006, 7:49:19 UTC

There is a (imho big) problem with Einstein's credits at the moment:
Here's my result page over there as of now:

Result ID  WU ID     Sent                      Reported                  Status             CPU time (s)  Claimed  Granted
36097230   10629448  6 Jul 2006 16:32:02 UTC   9 Jul 2006 4:39:52 UTC    Over Success Done  30,003.89     176.83   176.83
35936599   10554820  4 Jul 2006 18:13:52 UTC   6 Jul 2006 20:55:31 UTC   Over Success Done  30,135.70     176.83   176.83
35662821   10428384  1 Jul 2006 8:35:00 UTC    5 Jul 2006 5:09:43 UTC    Over Success Done  30,285.45     176.83   176.83
33903912   9698198   14 Jun 2006 15:16:14 UTC  16 Jun 2006 13:24:11 UTC  Over Success Done  19,439.95     27.04    44.44

That's an average of 21 credits/h for S5 and 5 credit/h for S4.
It's all on the same machine, all with the same setup, all with the stock application, so by definition all should get the same credit/h.
The amount for the S4 is in the same ballpark as my other projects (5 +/- 0.5); the S5 is far off target.

The reason given by one of the participants is that it's because of optimisations in the application, but I don't use optimised apps, and everything straight from the project is by definition stock.

I don't know whether there is some communication between the projects on an admin level (between Eric and Bruce, for example) on these issues, but I thought it would fit in this thread.

Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki
ID: 364382 · Report as offensive
Astro
Volunteer tester
Avatar

Send message
Joined: 16 Apr 02
Posts: 8026
Credit: 600,015
RAC: 0
Message 364554 - Posted: 12 Jul 2006, 12:38:43 UTC

Saenger, what follows is Eric K's first post to a topic which is being discussed. I think they're working on it.

tony


[boinc_dev] Need for a cross project credit standard....

Eric J Korpela to BOINC
More options Jun 29

Given the recent SETI@home credit/optimization flame wars and what is
happening with Einstein's recent apps, I think we need to come up with
a cross project credit standard. The original idea in BOINC was to
give credit for floating point, integer operations, disk space used,
etc. The primary problem with the method originally used to grant
credit was that it was based upon benchmarks that gave performance
that was unrealistically high for the real applications. When
SETI@home transitioned to granting credit based upon floating point
operations we had to stick in a multiplier of about 3.5X the floating
point operation count in order to match the credit given based upon
the benchmarks.

This caused an uproar for a variety of reasons. The first was that
the new version of SETI@home was more highly optimized than the old
version, so people running optimized versions couldn't claim 5X the
credit of people running unoptimized versions. The second was that
"fast machines" saw a decrease in credit granted per hour (primarily
because a 3GHz machine doesn't typically have memory that is 50%
faster than a 2GHz machine).

Einstein@home has also recently started using FLOP counting in its
applications. Perhaps in response to the furor on the SETI@home
forums, E@H grants significantly higher credit per hour (2X-4X) in its
FLOP counting version than in its older versions.

I worry that this lack of a standard is going to result in "credit
inflation", where in response to actions by other projects and due to
complaints from a vocal minority of volunteers, projects are forced
into granting an ever-increasing number of credits per hour.

I think we need to develop a credit standard in order to prevent this.
This credit standard should 1) be measurable on a common machine
(possibly on every machine) 2) be publicly available 3) specify means
of comparing applications 4) reward optimization.

Lacking other suggestions, I propose to create a "standard"
(non-vectorized, Ooura FFT, gcc -O2 compiled) version of SETI@home
enhanced with a "standard" workunit as a credit standard. Based upon
run-time and floating point operations of this standard, other
projects can calibrate their floating point credit for a
(non-vectorized, gcc -O2 compiled) version of their own application.

If anyone has a better idea, or would prefer that there be no
standard, speak up.

Eric
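The calibration Eric proposes could be sketched as a simple ratio: measure the credit rate of the "standard" build on a reference workunit, then scale a project's own rate to match it. The figures below are invented for illustration:

```python
# Sketch of calibrating a project's credit against a "standard"
# (non-vectorized, gcc -O2) reference application. Numbers invented.

def calibrated_multiplier(standard_cr_per_hr: float,
                          project_cr_per_hr: float) -> float:
    """Factor a project would apply so its rate matches the standard's."""
    return standard_cr_per_hr / project_cr_per_hr

# If the standard app earns 7.4 cr/hr on a reference host and a
# project's stock app currently pays 13.8 cr/hr on the same host:
print(calibrated_multiplier(7.4, 13.8))  # ~0.54
```

A multiplier below 1 means the project is currently paying above the standard rate, which matches the Einstein-vs-SETI ratios reported earlier in the thread.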
ID: 364554 · Report as offensive
 
©2026 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.