More SERENDIP work

Message boards : Nebula : More SERENDIP work
David Anderson
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 13 Feb 99
Posts: 139
Credit: 502,653
RAC: 0
Message 1942923 - Posted: 6 Jul 2018, 20:15:51 UTC

SETI@home/Nebula progress is currently blocked on Eric Korpela's efforts to generate gaussians (as well as spikes) for birdies. This will hopefully let Nebula detect more birdies, demonstrating the value of looking for gaussians.

In the meantime, I returned to working on Nebula for SERENDIP 6 (hereafter S6). S6 signals, which are detected by an on-site GPU-based system rather than by volunteer computing, are essentially spikes with a fixed FFT length and chirp rate. However, the two data sources have different parameters: frequency range, time range, sampling rate, and so on. Originally these were compile-time constants, so I needed separate Nebula executables for S@h and S6. I eliminated this duplication by setting the parameters at runtime.
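The idea can be sketched as follows. This is illustrative only, not Nebula's actual code: the struct fields, function names, and all numeric values (except the 14 beams mentioned later in this post) are placeholders.

```cpp
#include <stdexcept>
#include <string>

// Per-source parameters that were once compile-time constants,
// now selected at runtime so one executable serves both sources.
struct SourceParams {
    double freq_min_mhz;    // bottom of frequency range (placeholder values)
    double freq_max_mhz;    // top of frequency range
    double sample_rate_hz;  // sampling rate
    int num_beams;          // valid beam numbers are 0 .. num_beams-1
};

SourceParams params_for(const std::string& source) {
    if (source == "sah") {
        // SETI@home: 14 beams (0..13, per this post); other values invented.
        return {1418.75, 1421.25, 2.5e6, 14};
    }
    if (source == "s6") {
        // SERENDIP 6: all values invented for illustration.
        return {1370.0, 1470.0, 4.0e8, 14};
    }
    throw std::invalid_argument("unknown data source: " + source);
}
```

The same pipeline code then reads its limits from the selected struct rather than from `#define`s, so adding a future data source means adding a branch, not a build target.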

There were lots of changes and it took a while to get the pipeline working again for S6. Then I noticed that RFI removal was extremely slow. This was because S6 data has much larger clusters of signals (in small time/frequency areas) than does S@h. The R-Trees I was using to store and access signals were bogging down.

So I restructured the RFI removal code to a) reduce the size of R-Trees, e.g. minimize the time window that we store in them; and b) use more efficient data structures (e.g. STL deques) where possible.
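To illustrate the deque part (a hypothetical sketch, not Nebula's code): when signals arrive sorted by time, a sliding window over them can be an STL deque, with new signals pushed on the back and expired ones popped off the front, avoiding tree overhead entirely for that access pattern.

```cpp
#include <cstddef>
#include <deque>

// Minimal signal record for the sketch; fields are illustrative.
struct Signal {
    double time;      // detection time (seconds)
    double freq_mhz;  // detection frequency
};

// Sliding time window over time-ordered signals, backed by a deque.
class TimeWindow {
    std::deque<Signal> window_;
    double span_;  // window length in seconds
public:
    explicit TimeWindow(double span) : span_(span) {}

    // Add a signal (callers must supply nondecreasing times) and evict
    // any signals that have fallen out of the window.
    void push(const Signal& s) {
        window_.push_back(s);
        while (!window_.empty() && window_.front().time < s.time - span_) {
            window_.pop_front();
        }
    }

    std::size_t size() const { return window_.size(); }
};
```

Both ends of a deque support amortized O(1) insertion and removal, which fits this enter-at-back, expire-at-front pattern; an R-Tree only pays off when you need genuine 2-D range queries over time and frequency.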

This was a mammoth effort. The results were slightly underwhelming - a speedup of maybe 2X but not 10X - but still worth doing.

After that, in the last couple of weeks, I returned to S@h and ran the pipeline, including the new RFI removal, just to make sure everything still worked. It didn't. RFI removal crashed several hours into the processing, with what looked like memory corruption. A nightmare debugging scenario.

But I whittled away at it, and eventually found the problem: a number of spikes had a beam number of -3. Valid beam numbers are 0 to 13. The -3 caused an out-of-bounds array reference. So I added a beam number check to the data cleansing program ("filter") and things work now.
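The fix boils down to a bounds check like the following (illustrative; the actual "filter" program's code and names may differ):

```cpp
// Reject signals whose beam number is outside the valid range 0..13,
// before a bad value can index off the end of a per-beam array downstream.
const int NUM_BEAMS = 14;  // valid beam numbers are 0..13

bool beam_valid(int beam) {
    return beam >= 0 && beam < NUM_BEAMS;
}
```

A beam number of -3, like the ones that caused the crash, fails this check and the signal is dropped during cleansing instead of corrupting memory hours later.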

I'm rerunning the pipeline as we speak and hope to have new results on the web in the next couple of days.
