In the late 1990s, together with David Gedye, Dan Werthimer, and Woody Sullivan, I organized the SETI@home project. Our goal was to use volunteer computing - millions of home PCs - as part of a radio SETI project. SETI@home analyzes data from the Arecibo Radio Observatory. The computing power available through volunteer computing lets us do a more sensitive and general search than would otherwise be possible.
Dan and I set up the project at the UC Berkeley Space Sciences Lab. We were soon joined by Eric Korpela, Jeff Cobb, and Matt Lebofsky. This team has remained together, with changing roles, to the present day.
For the first year or two, we focused on the front end of SETI@home. This involves recording data at Arecibo, splitting it into workunits, distributing the workunits to volunteers' computers, and collecting the signals their analysis returns.
The analysis of a typical workunit returns a dozen or so signals - blips of power at particular times and frequencies. Since SETI@home started in 1999, we've accumulated about 6 billion signals.
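As an illustration (this is not SETI@home's actual data format; the field names are hypothetical), each detected signal can be thought of as a small record of power at a particular time and frequency, tagged with the sky position being observed:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One detection from a workunit: a blip of power at a time and frequency.
    Field names are illustrative, not SETI@home's actual schema."""
    time: float       # observation time (seconds since some epoch)
    frequency: float  # Hz
    power: float      # signal power relative to the noise floor
    ra: float         # right ascension of the telescope pointing (degrees)
    dec: float        # declination (degrees)

# A typical workunit yields a dozen or so such records.
workunit_signals = [
    Signal(time=1_000.0, frequency=1_420_000_123.0, power=24.7, ra=180.1, dec=18.3),
    Signal(time=1_012.5, frequency=1_420_000_987.0, power=19.2, ra=180.1, dec=18.3),
]
```

Multiplied by roughly 20 years of workunits, records like these are how an archive of billions of signals accumulates.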
In 2002 our focus shifted to back-end processing. The goal is to take our archive of signals and identify the signals (or groups of signals) most likely to be from ET. Back-end processing consists of two major steps: removing signals likely caused by radio-frequency interference (RFI), and finding persistent signals: groups of similar signals (multiplets) detected at the same sky location at different times.
The latter step is also called scoring because it assigns scores (based on statistical probabilities) to multiplets and sky locations. A high-scoring persistent signal isn't necessarily ET; it's more likely to be noise and/or RFI. To check, we can re-observe that point in the sky. If we detect a signal of the same type and frequency, we'd work with other SETI projects to re-observe it with different instruments.
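A minimal sketch of the grouping idea (not SETI@home's actual algorithm; the bin sizes and the size-based score are placeholders for the statistical tests described above) might cluster signals that recur at nearly the same frequency and sky position across different observation times:

```python
from collections import defaultdict

def find_multiplets(signals, freq_bin_hz=1_000.0, sky_bin_deg=0.1):
    """Group signals into candidate multiplets by coarse frequency and
    sky-position bins. Granularity is illustrative; the real pipeline
    uses detailed probability-based scoring."""
    bins = defaultdict(list)
    for s in signals:
        key = (round(s["frequency"] / freq_bin_hz),
               round(s["ra"] / sky_bin_deg),
               round(s["dec"] / sky_bin_deg))
        bins[key].append(s)
    # Keep only groups detected at more than one distinct time:
    # persistence across observations is what makes a multiplet interesting.
    multiplets = [g for g in bins.values()
                  if len({s["time"] for s in g}) > 1]
    # Rank by number of independent detections (a crude stand-in
    # for a statistical score).
    return sorted(multiplets, key=len, reverse=True)
```

A one-off blip lands in a bin alone and is discarded; two detections of the same frequency at the same sky point on different days survive as a candidate worth re-observing.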
In terms of computing requirements, the front and back ends are very different. The front end is compute intensive - it requires large amounts of computing on small amounts of data. The back end tasks are data intensive - they require fast access to large amounts of data.
©2019 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation and NASA, and by donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.