Nebula meets its performance goals. Copying files to Atlas takes a couple of days, but the other steps take less than a day in total.
As of this writing we're still in the iterative process of scoring 100K or so pixels, looking at the results, and tweaking the RFI algorithms. The average time to score a pixel is less than a second, so using 1,000 Atlas compute nodes we'll be able to score all 16M pixels in ~16,000 seconds, or roughly 4.5 hours.
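The estimate above is simple arithmetic; here is the calculation spelled out (the per-pixel time is an upper bound from the text, so the real total should be somewhat less):

```python
# Back-of-the-envelope estimate of total scoring time, using the
# figures from the text above.
N_PIXELS = 16_000_000    # pixels to score
SEC_PER_PIXEL = 1.0      # average scoring time per pixel (upper bound)
N_NODES = 1_000          # Atlas compute nodes used in parallel

total_sec = N_PIXELS * SEC_PER_PIXEL / N_NODES
total_hours = total_sec / 3600
print(total_sec)     # 16000.0 seconds
print(total_hours)   # ~4.4 hours
```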
I've developed web interfaces for browsing the top-scoring multiplets and pixels, for drilling down to the component signals, and for displaying waterfall plots of the signals in their time/frequency neighborhood. This gives us a way of spotting obvious RFI.
Near-term plans (months? 1 year?) include:
Currently the input to the SETI@home client is a file of data from a single telescope beam. We plan to change this so that the input file contains data from all 14 beams. This will make it possible to do multi-beam RFI removal in the client.
The goal is not to offload work to the client, but rather to have the client return non-RFI signals that would otherwise be drowned out by RFI. In the current client, if the number of signals above threshold exceeds a limit (typically because of RFI), the analysis is stopped.
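To make the effect of the signal limit concrete, here is a minimal sketch of the behavior described above. The function name, the limit value, and the signal representation are all illustrative assumptions, not the actual client code:

```python
# Hypothetical sketch of the per-workunit signal limit described above.
# SIGNAL_LIMIT and the (power, is_rfi) representation are assumed, not
# taken from the actual SETI@home client.
SIGNAL_LIMIT = 30

def analyze(signals, remove_rfi=False):
    """Return the signals kept, or None if the limit is exceeded.

    'signals' is a list of (power, is_rfi) pairs. With multi-beam input,
    the client could discard RFI (is_rfi=True) before it counts against
    the limit, so real signals are no longer drowned out.
    """
    kept = []
    for power, is_rfi in signals:
        if remove_rfi and is_rfi:
            continue                 # multi-beam RFI removal: skip RFI
        kept.append(power)
        if len(kept) > SIGNAL_LIMIT:
            return None              # too many signals: analysis stopped
    return kept
```

With `remove_rfi=False`, a burst of RFI aborts the analysis; with `remove_rfi=True`, the same input yields the non-RFI signals.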
There have been proposals to combine the data from other radio SETI projects into a unified framework.
Eventually we'd like to move away from a local SQL database completely. One approach is to store signals in daily flat files. We can then incrementally move these to Atlas, and perhaps to a more scalable SQL-like system such as Google's Bigtable as well.
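The daily-flat-file idea can be sketched in a few lines. The file naming (one JSON-lines file per UTC day) and the record format are illustrative assumptions, not Nebula's actual layout:

```python
# Minimal sketch of storing signals in daily flat files, one JSON record
# per line. File naming and record fields are assumptions for illustration.
import datetime
import json
import pathlib

def append_signal(signal: dict, root: str = "signals") -> pathlib.Path:
    """Append one signal record to today's flat file; return the file path."""
    day = datetime.date.today().isoformat()        # e.g. "2018-06-01"
    path = pathlib.Path(root) / f"{day}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(signal) + "\n")
    return path
```

Append-only daily files are trivial to ship to Atlas incrementally, and a row-per-line format loads cleanly into a Bigtable-style store later.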
©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. Astropulse is funded in part by the NSF through grant AST-0307956.