Status and plans


Nebula meets its performance goals. Copying files to Atlas takes a couple of days, but the other steps take less than a day in total.

As of this writing we're still in the iterative process of scoring 100K or so pixels, looking at the results, and tweaking the RFI algorithms. The average time to score a pixel is less than a second, so using 1,000 Atlas compute nodes we'll be able to score all 16M pixels in ~16,000 seconds, or about 4.5 hours.
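The throughput estimate above is simple arithmetic; a quick sketch of the calculation, using the figures stated in the text:

```python
# Back-of-the-envelope scoring time, using the figures above.
TOTAL_PIXELS = 16_000_000    # pixels to score
NODES = 1_000                # Atlas compute nodes used in parallel
SECS_PER_PIXEL = 1.0         # upper bound on per-pixel scoring time

pixels_per_node = TOTAL_PIXELS // NODES            # 16,000 pixels each
wall_clock_secs = pixels_per_node * SECS_PER_PIXEL
print(wall_clock_secs, wall_clock_secs / 3600)     # 16000.0 s, ~4.4 h
```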

I've developed web interfaces for browsing the top-scoring multiplets and pixels, for drilling down to their component signals, and for displaying waterfall plots of the signals in their time/frequency neighborhoods. This gives us a way of spotting obvious RFI.

Long-term plans

Eric and I have discussed various changes to SETI@home, which will affect Nebula.

Multi-beam client

Currently the input to the SETI@home client is a file of data from a single telescope beam. We plan to change this so that the input file contains data from all 14 beams. This will make it possible to do multi-beam RFI removal in the client.

The goal is not to offload work to the client, but rather to have the client return non-RFI signals that would otherwise be drowned out by RFI. In the current client, if the number of signals above threshold exceeds a limit (typically because of RFI), the analysis is stopped.
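One common form of multi-beam RFI rejection is coincidence detection: a genuine sky signal should appear in only one beam (or two adjacent beams) at a time, so a signal detected simultaneously at the same frequency in many beams is almost certainly terrestrial interference. The sketch below illustrates that idea; the `Signal` type, tolerances, and beam-count threshold are illustrative assumptions, not the client's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    beam: int        # which of the 14 beams detected it
    time: float      # detection time (seconds)
    freq: float      # detection frequency (Hz)

def flag_multibeam_rfi(signals, time_tol=1.0, freq_tol=10.0, max_beams=2):
    """Return the signals NOT seen in more than max_beams beams at
    (nearly) the same time and frequency; the rest are treated as RFI.
    Tolerances and threshold are placeholder values."""
    kept = []
    for s in signals:
        beams = {t.beam for t in signals
                 if abs(t.time - s.time) <= time_tol
                 and abs(t.freq - s.freq) <= freq_tol}
        if len(beams) <= max_beams:
            kept.append(s)
    return kept
```

A signal that survives this cut can then be reported even if it sits in a stretch of data that would otherwise blow past the per-workunit signal limit.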

New data sources

There have been proposals to combine the data from other radio SETI projects into a unified framework.

The idea is to look for persistent signals in the union of the data. It's likely that parts of Nebula could be used for this.
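One simple way to look for persistence in a union of surveys is to bin signals by coarse sky position and frequency and count how many distinct observation epochs each bin contains. This is a hypothetical sketch, not Nebula code; the bin sizes and record fields are assumptions:

```python
from collections import defaultdict

def persistence_score(signals, pos_res=0.1, freq_res=1000.0):
    """Group signals (dicts with ra, dec in degrees, freq in Hz, and an
    observation epoch) by coarse sky position and frequency, and count
    the distinct epochs in each group.  A high count across surveys
    suggests a persistent source rather than a one-off detection."""
    groups = defaultdict(set)
    for s in signals:
        key = (round(s["ra"] / pos_res), round(s["dec"] / pos_res),
               round(s["freq"] / freq_res))
        groups[key].add(s["epoch"])
    return {key: len(epochs) for key, epochs in groups.items()}
```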

Flat-file back end

Eventually we'd like to move away from a local SQL database completely. One approach is to store signals in daily flat files. We can then incrementally move these files to Atlas.
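A minimal sketch of what a daily flat-file back end might look like, assuming one JSON-lines file per UTC day (the file layout and record format here are illustrative, not a committed design):

```python
import json
import os
from datetime import datetime, timezone

def append_signal(signal, root="signal_archive"):
    """Append one signal record, as a line of JSON, to a flat file named
    for the current UTC day.  Daily files can then be shipped to Atlas
    incrementally (e.g. with rsync) without involving a database."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    os.makedirs(root, exist_ok=True)
    path = os.path.join(root, f"signals-{day}.jsonl")
    with open(path, "a") as f:
        f.write(json.dumps(signal) + "\n")
    return path
```

Append-only daily files are cheap to write on the recording side and trivially resumable to copy, which fits the "copy to Atlas takes days" workflow described above.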

©2019 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation and NASA, and by donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.