Suggestion to make s@hE more efficient and more user friendly

EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339146 - Posted: 16 Jun 2006, 12:13:10 UTC
Last modified: 16 Jun 2006, 12:45:50 UTC

Since s@hE tasks have variable completion times that depend on AR, and since sending "tiny" tasks to faster machines or "huge" tasks to slower machines represents an "impedance mismatch":

On the server side:
Use the "average turnaround time" and "Measured Floating point" field to sort known active clients (breaking ties with the "measured integer" field.
Sort available tasks according to estimated completion time.
The fastest client gets the task with the longest estimated running time, the slowest client gets the task with the shortest estimated running time, and so on.

Everyone gets a task that is matched as closely as possible to their relative crunching ability.

On the client side:
Add a new field to the seti preferences area that allows a user to ask for smaller or larger tasks.
This way 24x7 speed demons can ask to attack the harder tasks, and occasional, much slower crunchers can avoid potential deadline problems by going after tasks more suited to their usage pattern and/or capabilities.
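
A minimal sketch of the server-side matching idea (all field names and numbers below are hypothetical, not the actual scheduler code): score hosts from their benchmark and turnaround history, sort queued tasks by estimated work, and pair the fastest host with the longest task.

// Illustrative sketch only: hypothetical host/task records, not BOINC scheduler code.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Host {
    std::string name;
    double measured_fpops;      // "measured floating point" benchmark
    double avg_turnaround_days; // historical average turnaround
};

struct Task {
    std::string name;
    double estimated_fpops;     // estimated work, a proxy for estimated run time
};

// Crude effective-speed score: benchmark speed discounted by slow turnaround.
double speed_score(const Host& h) {
    return h.measured_fpops / std::max(h.avg_turnaround_days, 0.1);
}

int main() {
    std::vector<Host> hosts = {
        {"fast-24x7", 3.0e9, 0.5}, {"laptop", 1.2e9, 3.0}, {"old-p3", 0.4e9, 6.0}};
    std::vector<Task> tasks = {
        {"wu-mid-ar", 12e12}, {"wu-low-ar", 30e12}, {"wu-high-ar", 4e12}};

    // Fastest host first, longest task first, then pair them off.
    std::sort(hosts.begin(), hosts.end(), [](const Host& a, const Host& b) {
        return speed_score(a) > speed_score(b);
    });
    std::sort(tasks.begin(), tasks.end(), [](const Task& a, const Task& b) {
        return a.estimated_fpops > b.estimated_fpops;
    });

    for (std::size_t i = 0; i < hosts.size() && i < tasks.size(); ++i)
        std::cout << hosts[i].name << " <- " << tasks[i].name << "\n";
}

A user preference for smaller or larger tasks could then simply bias a host's position in that ordering, while the server still makes the final assignment.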
W-K 666 Project Donor
Volunteer tester
Joined: 18 May 99
Posts: 19849
Credit: 40,757,560
RAC: 67
United Kingdom
Message 339176 - Posted: 16 Jun 2006, 12:25:08 UTC

Not a good idea. It has already been noted, here and on other projects, that a lot of people prefer short units to long ones. So you would probably end up with some of these people with fast machines sucking up all the short units and leaving none for the people with slower hardware.
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339204 - Posted: 16 Jun 2006, 12:43:35 UTC - in response to Message 339176.  
Last modified: 16 Jun 2006, 12:45:07 UTC

Not a good idea. It has already been noted, here and on other projects, that a lot of people prefer short units to long ones. So you would probably end up with some of these people with fast machines sucking up all the short units and leaving none for the people with slower hardware.


I took that into account when I said people could =ask= for easier or harder tasks while the server =decides= what tasks they get.

Basing the server's task allocation on historical turnaround time and measured FP (plus possibly measured integer) performance makes it nigh unto impossible for anyone to "cheat" and avoid their fair share of work.

As you've mentioned, this is a problem across BOINC, not just here at seti.
Therefore it seems logical that there is value in attempting to address it, yes?
Jim-R.
Volunteer tester
Joined: 7 Feb 06
Posts: 1494
Credit: 194,148
RAC: 0
United States
Message 339205 - Posted: 16 Jun 2006, 12:44:24 UTC - in response to Message 339146.  
Last modified: 16 Jun 2006, 12:51:51 UTC

Since s@hE tasks have variable completion times that depend on AR, and since sending "tiny" tasks to faster machines or "huge" tasks to slower machines represents an "impedance mismatch":

On the server side:
Use the "average turnaround time" and "measured floating point" fields to sort known active clients (breaking ties with the "measured integer" field).
Sort available tasks according to estimated completion time.
The fastest client gets the task with the longest estimated running time, the slowest client gets the task with the shortest estimated running time, and so on.

Everyone gets a task that is matched as closely as possible to their relative crunching ability.

On the client side:
Add a new field to the seti preferences area that allows a user to ask for smaller or larger tasks.
This way 24x7 speed demons can ask to attack the harder tasks, and occasional, much slower crunchers can avoid potential deadline problems by going after tasks more suited to their usage pattern and/or capabilities.

I totally agree with Winterknight, and this is not necessary anyway. Because the deadlines depend on the estimated amount of actual processing for a specific angle range, any computer that is capable of completing *any* work unit in the allotted time *should* be able to complete *all* work units within the deadline. Due to the specific peculiarities of some ARs this is not always true, but it will be true in the vast majority of cases. Also, there would be a large increase in processing and database storage needed to introduce something as complicated as this: you would have to add extra fields to the database to hold the information, plus the code to enter and read the preferences, plus the extra code in the scheduler to select certain ARs for certain computers. A big increase in workload that, as I already mentioned, is totally unnecessary.
Jim

Some people plan their life out and look back at the wealth they've had.
Others live life day by day and look back at the wealth of experiences and enjoyment they've had.
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339211 - Posted: 16 Jun 2006, 12:52:05 UTC
Last modified: 16 Jun 2006, 12:53:27 UTC

Jim, there clearly is a problem, and it clearly is not restricted to seti, given the threads and posts I've been seeing on the topic.

In addition, since the whole point of the BOINC approach is to make use of what under other circumstances would be wasted compute power, =most= crunchers are going to be more casual or using slower hosts.
Having these guys choke on tasks that are too demanding is basically driving them away from BOINC. That doesn't make sense.

So either we need to make sure that the largest task is capable of running to completion on the most casual and slowest host, which I do not think is realistic, or we have to implement some form of task load balancing.


Pooh Bear 27
Volunteer tester
Joined: 14 Jul 03
Posts: 3224
Credit: 4,603,826
RAC: 0
United States
Message 339212 - Posted: 16 Jun 2006, 12:53:32 UTC

The issue with not giving people work randomly is that the people with faster machines would get the longer WUs, and that will make some of them leave (it has already been discussed by several). People will move to projects with shorter WUs.

I also believe that if this happened, people would write their own BOINC application to fake the system into thinking they have a slower machine, by skewing the benchmarks and sending fake processor IDs to the project, just to get the shorter WUs.

If people take all the shorter WUs and then run low on them or out of them, they would not be crunching, and that would piss them off too.

Randomization is best.


My movie https://vimeo.com/manage/videos/502242
Jim-R.
Volunteer tester
Joined: 7 Feb 06
Posts: 1494
Credit: 194,148
RAC: 0
United States
Message 339232 - Posted: 16 Jun 2006, 13:09:33 UTC

I disagree with your premise. I am running a 500 MHz P3 attached to three projects: Einstein, Seti and Seti Beta. Plus I use it every day for my own programs, even playing graphics-intensive action games. I have yet to have it go into EDF mode once, let alone miss a deadline. So unless you are speaking of "antique" 486s or 66 MHz P1s, your premise that older computers can't crunch for SETI just doesn't hold water. It's true that you can't load them down with 10 or 12 projects like you can a newer, faster computer, but if you have any type of computer that is capable of running *any* fairly modern software package, it can crunch for at least one project without any problems. This has been tested in Beta on computers as slow as (I think) a 133 MHz P1 without any problems.

Again, your statement about the slower computer missing the longer deadline while not missing the short one doesn't hold water either, as I've said before. The deadline is based on the estimated completion time of a WU at a particular AR. If a computer is going to miss one deadline it will miss *all* of them. Conversely, if it makes *one* deadline, then it is capable of making *all* deadlines.
Jim

Some people plan their life out and look back at the wealth they've had.
Others live life day by day and look back at the wealth of experiences and enjoyment they've had.
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339357 - Posted: 16 Jun 2006, 15:02:38 UTC - in response to Message 339212.  
Last modified: 16 Jun 2006, 15:03:14 UTC

The issue with not giving people work randomly is that the people with faster machines would get the longer WUs, and that will make some of them leave (it has already been discussed by several). People will move to projects with shorter WUs.

I also believe that if this happened, people would write their own BOINC application to fake the system into thinking they have a slower machine, by skewing the benchmarks and sending fake processor IDs to the project, just to get the shorter WUs.

If people take all the shorter WUs and then run low on them or out of them, they would not be crunching, and that would piss them off too.

Randomization is best.

Actually, your post reads to me as an eloquent argument in favor of
a= shorter tasks in general, and
b= more uniformly sized tasks
Fix both problems and not only do we not need to have this discussion, but a few other issues will be addressed as well.

As for the cheating issue(s),
a= it seems silly to assume anyone would put that much effort in for so little reward. After all, if I'm going to expend effort being unethical I want to get something tangibly valuable out of it.
b= cheating in the manner you suggest is trivially easy to detect and correct for.

Pooh Bear 27
Volunteer tester
Joined: 14 Jul 03
Posts: 3224
Credit: 4,603,826
RAC: 0
United States
Message 339371 - Posted: 16 Jun 2006, 15:13:40 UTC - in response to Message 339357.  
Last modified: 16 Jun 2006, 15:16:54 UTC

Actually, your post reads to me as an eloquent argument in favor of
a= shorter tasks in general, and
b= more uniformly sized tasks
Fix both problems and not only do we not need to have this discussion, but a few other issues will be addressed as well.

The results you download are all the same size. What differs is each result's angle range, so there is no making them all take the same length of time with Enhanced.

Short tasks, long tasks, it's all science. If you do not babysit your machines all day (which is how the software is really meant to be used), you'd never notice how long they take to crunch. My machines sit and work, and I look at them maybe once every 2-3 days just to make sure they are still alive. Babysitting takes too much of my time, and computers do not need it.

As for the cheating issue(s),
a= it seems silly to assume anyone would put that much effort in for so little reward. After all, if I'm going to expend effort being unethical I want to get something tangibly valuable out of it.
b= cheating in the manner you suggest is trivially easy to detect and correct for.

Sounds like you were not around for the Classic days. People do what they can to cheat; it ran rampant in Classic. Because of the lack of ability to cheat in BOINC, many of those cheaters left the project. If you give them a way to cheat again, they will come back.


My movie https://vimeo.com/manage/videos/502242
Pepperammi
Joined: 3 Apr 99
Posts: 200
Credit: 737,775
RAC: 0
United Kingdom
Message 339376 - Posted: 16 Jun 2006, 15:16:06 UTC - in response to Message 339232.  
Last modified: 16 Jun 2006, 15:19:52 UTC

I am running a 500 MHz P3 .... playing graphics-intensive action games.

?

sorry that just tickled me a little.

Maybe you've forgotten that part of what was supposed to be good about Enhanced is that it was supposed to let older computers join in that couldn't before, because the crunch times were just too long, but the optimizations meant they could. Remember the number of posts from people thanking the optimizers when the first optimized apps came out, because they could now piece together some of their ancient hardware and use it for a good cause.

Enhanced is supposed to already have a system of sorts that allocates smaller units to slower machines, but it doesn't work. It's based on how much work you request, but you could request 20 minutes of work and still get a couple of units estimated at 5 hours each.

As for some people opting for smaller units just for credit - well, that wouldn't be a problem if/when they get the credit system leveled. You're supposed to get more credit for long amounts of work and less for small, but it doesn't add up at the moment.
If/when they get it sorted so that, for example, two 1-hour units claim the same as a single 2-hour unit, then there wouldn't be people who would do that.
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339406 - Posted: 16 Jun 2006, 15:40:25 UTC - in response to Message 339371.  


The results you download are all the same size. What differs is each result's angle range, so there is no making them all take the same length of time with Enhanced.

You are being a bit disingenuous. You know very well that my comment implied that tasks should take as close to the same estimated CPU time as possible. Clearly file size is not what is being discussed here.


Short tasks, long tasks, it's all science. If you do not babysit your machines all day (which is how the software is really meant to be used), you'd never notice how long they take to crunch. My machines sit and work, and I look at them maybe once every 2-3 days just to make sure they are still alive. Babysitting takes too much of my time, and computers do not need it.

Thanks to tasks hanging, ignoring a host for 2-3 days is presently not a reasonable option. Nor does everyone have the same usage pattern you do.

The wider the range of usage styles and host characteristics we can cater to, the more crunchers there will be.


Sounds like you were not around for the Classic days. People do what they can to cheat; it ran rampant in Classic. Because of the lack of ability to cheat in BOINC, many of those cheaters left the project. If you give them a way to cheat again, they will come back.

I was and I remember. I also know just how much harder it is to cheat now compared to then, and especially how much harder it is to cheat in an undetectable way.
There is also the question as to why people would want to cheat.
Make it hard to cheat and address the underlying issues as to why people would want to and you get both more crunchers and more honestly acting crunchers.
Clyde C. Phillips, III
Joined: 2 Aug 00
Posts: 1851
Credit: 5,955,047
RAC: 0
United States
Message 339513 - Posted: 16 Jun 2006, 18:46:53 UTC

One thing that's putting crunchers in favor of short units is that those units yield cobblestones at a higher rate (per hour) than the long ones. This shouldn't be.
W-K 666 Project Donor
Volunteer tester
Joined: 18 May 99
Posts: 19849
Credit: 40,757,560
RAC: 67
United Kingdom
Message 339518 - Posted: 16 Jun 2006, 18:52:46 UTC - in response to Message 339513.  

One thing that's putting crunchers in favor of short units is that those units yield cobblestones at a higher rate (per hour) than the long ones. This shouldn't be.

Although that is true, it actually depends more on the computer system. It was noted some time ago in the Beta trial that Intels with a large L2 cache perform much better at high and very high AR, i.e. the short units.

Andy
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 339579 - Posted: 16 Jun 2006, 20:06:26 UTC - in response to Message 339518.  
Last modified: 16 Jun 2006, 20:08:30 UTC

One thing that's putting crunchers in favor of short units is that those units yield cobblestones at a higher rate (per hour) than the long ones. This shouldn't be.

Although that is true, it actually depends more on the computer system. It was noted some time ago in the Beta trial that Intels with a large L2 cache perform much better at high and very high AR, i.e. the short units.

Andy

Can you please provide pointers to those discussions?

Among other things, I'm wondering if there's a significant performance difference between the Intel cores as L1 cache varies from 8KB to 16KB to 32KB or if the effect is simply tied to the last and biggest cache in the hierarchy.

Also, I note in passing that it is getting fairly important to figure out a way to put optional SSE/SSE2/SSE3 support, based on CPU detection, into the standard BOINC and seti clients, given the =huge= performance improvement available with the newest implementations of SSE coming from both AMD and Intel.

My Core Duo is no slouch when running an SSE2/SSE3-optimized client, and Intel's new Core 2s are supposed to be 4x faster at SSE computation at the same clock rate, have faster clock rates available, and be 64-bit to boot (which means at least 2x as many program-visible registers, good for a 20-40% performance increase just from having more registers available).

We could not have lost the SSE-optimized apps or the push for 64-bit clients at a worse time...
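
As an aside, a minimal sketch of what "optional SSE support based on CPU detection" could look like (illustration only: it assumes GCC or Clang on x86, where __builtin_cpu_supports() exists; the transform function names are hypothetical and this is not the stock client's code):

// Illustration only: pick an optimized code path at run time based on CPU features.
#include <cstdio>

static void transform_scalar(float* buf, int n) { (void)buf; (void)n; /* portable plain-C path */ }
static void transform_sse2(float* buf, int n)   { (void)buf; (void)n; /* SSE2 intrinsics would go here */ }

using transform_fn = void (*)(float*, int);

static transform_fn pick_transform() {
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    if (__builtin_cpu_supports("sse2")) return transform_sse2;   // detected at run time
#endif
    return transform_scalar;                                     // safe fallback
}

int main() {
    float data[8] = {0};
    pick_transform()(data, 8);
    std::puts("transform done");
    return 0;
}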
Rom Walton (BOINC)
Volunteer tester
Joined: 28 Apr 00
Posts: 579
Credit: 130,733
RAC: 0
United States
Message 339634 - Posted: 16 Jun 2006, 21:14:41 UTC - in response to Message 339579.  
Last modified: 16 Jun 2006, 21:15:22 UTC


Also, I note in passing that it is getting fairly important to figure out a way to put optional SSE/SSE2/SSE3 support, based on CPU detection, into the standard BOINC and seti clients, given the =huge= performance improvement available with the newest implementations of SSE coming from both AMD and Intel.


Progress has been made on that front. Starting with 5.5.1, on Windows and Linux, basic CPU capability detection is in place.

Here is an example from my machine:
6/16/2006 3:35:26 AM||Starting BOINC client version 5.5.1 for windows_intelx86
6/16/2006 3:35:26 AM||libcurl/7.15.3 OpenSSL/0.9.8a zlib/1.2.3
6/16/2006 3:35:26 AM||Data directory: C:\Program Files\BOINC
6/16/2006 3:35:27 AM||Processor: GenuineIntel Intel(R) Xeon(TM) CPU 3.06GHz
6/16/2006 3:35:27 AM||Processor count: 4
6/16/2006 3:35:27 AM||Processor capabilities: fpu tsc sse sse2 mmx
6/16/2006 3:35:27 AM||Memory: 2.00 GB physical, 3.85 GB virtual
6/16/2006 3:35:27 AM||Disk: 223.57 GB total, 96.47 GB free
6/16/2006 3:35:27 AM||Version change (5.5.0 -> 5.5.1); running CPU benchmarks


I still need to add the code for the Mac.

This information is passed to both the science applications and the project servers.

----- Rom
BOINC Development Team, U.C. Berkeley
My Blog
DarkStar
Volunteer tester
Joined: 13 Jun 99
Posts: 119
Credit: 808,179
RAC: 0
Marshall Islands
Message 339981 - Posted: 17 Jun 2006, 4:00:49 UTC - in response to Message 339357.  

quoting EricVonDaniken:
quoting Pooh Bear 27: The issue with not giving people work randomly is that the people with faster machines would get the longer WUs, and that will make some of them leave (it has already been discussed by several). People will move to projects with shorter WUs.

I also believe that if this happened, people would write their own BOINC application to fake the system into thinking they have a slower machine, by skewing the benchmarks and sending fake processor IDs to the project, just to get the shorter WUs.

If people take all the shorter WUs and then run low on them or out of them, they would not be crunching, and that would piss them off too.

Randomization is best.

Actually, your post reads to me as an eloquent argument in favor of
a= shorter tasks in general, and
b= more uniformly sized tasks
Fix both problems and not only do we not need to have this discussion, but a few other issues will be addressed as well.

As for the cheating issue(s),
a= it seems silly to assume anyone would put that much effort in for so little reward. After all, if I'm going to expend effort being unethical I want to get something tangibly valuable out of it.
b= cheating in the manner you suggest is trivially easy to detect and correct for.

Excuse me, but I don't see anywhere PB27 referred to "cheating", and I strongly disagree that any user choosing what work they do or don't want to process on their own machine is "cheating".

If I were to decide that I only wanted to process work where the third character in the ID is between "3" and "5", and wanted to babysit my machine as it downloaded work units and cancel those that didn't fit my criteria, I think that's my prerogative. If I decided I wanted to (and had the talent to) develop a modified client that would do that for me automatically, I think that's still my prerogative. Granted, someone else's credit might get delayed until my system reports the results as aborted and they get resent - more so with the longer work unit times - but at worst that's a minor inconvenience that might justify my being considered "inconsiderate". But cheating? I hardly think so.

In fact, as long as the results I return are scientifically valid, I don't really think it's anyone's business other than my own as to how those results are being calculated, or which ones I choose (or choose not to) calculate.

Now, if I were "faking" processing them and returning invalid results that were seen as valid, or returning duplicate results over and over again - then I would be cheating. Just because someone is doing something you don't like or agree with does not mean that they're "cheating".

IMHO, we all need to get this "c" word out of our collective vocabularies, and quit tossing it around quite so handily.
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 340080 - Posted: 17 Jun 2006, 7:00:09 UTC
Last modified: 17 Jun 2006, 7:01:03 UTC

"DarkStar" wrote:

Excuse me, but I don't see anywhere PB27 referred to "cheating", and I strongly disagree that any user choosing what work they do or don't want to process on their own machine is "cheating".

Your examples were not those of cheating.

PB's =were=.

If you lie about what work you have done or how long it took for you to do it, you hurt the entire project.

If you "hoard" easier tasks to boost your RAC / standing / whatever, you hurt every cruncher who plays "fair", and therefore the entire community, and therefore the entire project.

As PB noted, cheating was rampant at certain times during the s@hc days. It held back the science. It reduced legitimate participation. It was Bad (tm).

"everyone has the Right to be different as long as they do not adversely affect anyone else's Right to be different".
If you cross that line, you are doing Something Wrong.

When that Something Wrong involves the credit system that is used as the primary non-scientific motivator for this game we call "BOINC" or "seti",
then it is rightfully called cheating because it violates the "rules" we have agreed to play by.

Anyone involved with Ars Technica should be educated and smart enough to grok that.
1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 340430 - Posted: 17 Jun 2006, 15:49:51 UTC - in response to Message 339981.  


Excuse me, but I don't see anywhere PB27 referred to "cheating", and I strongly disagree that any user choosing what work they do or don't want to process on their own machine is "cheating".

Sadly, cheating was rampant in Classic, and much of the BOINC design is meant to grant credit only to scientifically valid work while minimizing cheating.

... and given the opportunity, some people will choose work to maximize their credits.

We're now counting FLOPS -- Floating Point OperationS. A floating point "add" counts as one FLOP, but happens relatively quickly. A floating point "SIN" counts as one FLOP but takes quite a bit longer. Trying to pick work units with more adds and fewer sins will maximize your credit -- but that isn't exactly playing fair.
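
A tiny timing sketch of that point (illustrative only; the actual numbers vary by machine and compiler): both loops below would be counted as the same number of floating point operations, yet the sin() loop takes far longer.

// Rough illustration: equal "FLOP counts", very different wall-clock cost.
#include <chrono>
#include <cmath>
#include <cstdio>

int main() {
    const int n = 10000000;
    volatile double sink = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) sink = sink + 1.0;            // n adds
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) sink = std::sin((double)i);   // n sins
    auto t2 = std::chrono::steady_clock::now();

    std::printf("adds: %.3f s   sins: %.3f s\n",
                std::chrono::duration<double>(t1 - t0).count(),
                std::chrono::duration<double>(t2 - t1).count());
    return 0;
}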
EricVonDaniken
Joined: 17 Apr 04
Posts: 177
Credit: 67,881
RAC: 0
United States
Message 340457 - Posted: 17 Jun 2006, 16:45:58 UTC - in response to Message 340430.  
Last modified: 17 Jun 2006, 16:47:39 UTC


Excuse me, but I don't see anywhere PB27 referred to "cheating", and I strongly disagree that any user choosing what work they do or don't want to process on their own machine is "cheating".

Sadly, cheating was rampant in Classic, and much of the BOINC design is meant to grant credit only to scientifically valid work while minimizing cheating.

... and given the opportunity, some people will choose work to maximize their credits.

We're now counting FLOPS -- Floating Point OperationS. A floating point "add" counts as one FLOP, but happens relatively quickly. A floating point "SIN" counts as one FLOP but takes quite a bit longer. Trying to pick work units with more adds and fewer sins will maximize your credit -- but that isn't exactly playing fair.

Using FLOPS is better than what we did back in the s@hc days, but in reality it is still a fairly poor load estimator, for exactly the reasons you have noted.
The (super)computer industry went through this same problem from the mid '70s to the late '80s. FLOPS were abandoned in favor of synthetic benchmarks based on profiled instances of the apps to be run (SPEC).
This is a known and proven solution that has worked well in practice for about two decades.
BOINC should be doing the same thing.

Then tasks could be made such that they are far more likely to run in roughly the same amount of time on a specific host than the use of FLOPS allows.
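
One way to read that suggestion (a sketch with made-up numbers, not an existing BOINC mechanism): profile one reference workunit, time it once on each host, and scale every other task's estimate from its profiled work relative to that reference.

// Sketch only: per-host run-time estimates from one profiled reference WU,
// instead of from generic benchmark FLOPS. All numbers here are invented.
#include <cstdio>

int main() {
    // Profiled work content, measured once on the project side (SPEC-style).
    const double ref_work  = 10e12;   // arbitrary work units for the reference WU
    const double task_work = 25e12;   // same units, for the candidate WU

    // Measured on this particular host: how long the reference WU actually took.
    const double ref_seconds_on_host = 7200.0;   // assumed: 2 hours

    // Host-specific estimate for the candidate task.
    const double est_seconds = ref_seconds_on_host * (task_work / ref_work);
    std::printf("estimated run time: %.1f hours\n", est_seconds / 3600.0);
    return 0;
}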


1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 340466 - Posted: 17 Jun 2006, 16:53:33 UTC - in response to Message 340457.  
Last modified: 17 Jun 2006, 16:53:46 UTC



We're now counting FLOPS -- Floating Point OperationS. A floating point "add" counts as one FLOP, but happens relatively quickly. A floating point "SIN" counts as one FLOP but takes quite a bit longer. Trying to pick work units with more adds and fewer sins will maximize your credit -- but that isn't exactly playing fair.

Using FLOPS is better than what we did back in the s@hc days, but in reality it is still a fairly poor load estimator, for exactly the reasons you have noted.
The (super)computer industry went through this same problem from the mid '70s to the late '80s. FLOPS were abandoned in favor of synthetic benchmarks based on profiled instances of the apps to be run (SPEC).
This is a known and proven solution that has worked well in practice for about two decades.
BOINC should be doing the same thing.

Then tasks could be made such that they are far more likely to run in roughly the same amount of time on a specific host than the use of FLOPS allows.


No matter what we try, the results are not going to be perfect.

FLOPs are not linear across all work units because some ranges use more of the "slow" ops while others use more of the faster ops.

Benchmarks aren't accurate because the benchmark doesn't always measure all of the relevant parameters. You can find threads that praise the Pentium M because it appears that the cache (at 2M) is big enough to hold a significant part of a WU -- but the benchmark runs entirely in the cache on almost all chips.

Basically, what we have here is a situation where there is no perfect answer, and no good deed will go unpunished.

Measuring actual work is probably about as good as can be done, and it does kill the whole "why did I get cheated, I claimed 60 and was only granted 20" question.