We continually work on several long-term projects that improve the power and scientific output of SETI@home. Recent accomplishments on these fronts, as well as future plans, are noted below.
After client applications return signals, they are validated and stored in our master database. The goal of SETI is to find persistent signals - those that appear at the same frequencies and points in the sky, but at different times. The aim of the NTPCKR is to identify and (re)score such persistent signals as new results arrive from the distributed clients, that is, in near real time. This will allow us to maintain an up-to-date list of our best candidates.
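As a rough illustration of the matching involved (a hypothetical sketch, not the actual NTPCKR code - the record layout and bin sizes here are invented), signals can be binned by frequency and sky position, and any bin hit at distinctly different times becomes a persistence candidate:

```python
from collections import defaultdict

# Hypothetical signal records: (frequency Hz, RA deg, Dec deg, observation time in days)
signals = [
    (1420000000.0, 201.30, 12.05, 100.0),
    (1420000012.0, 201.31, 12.04, 250.0),  # same spot and frequency, later epoch
    (1421500000.0, 88.10, -5.20, 300.0),   # unrelated one-off detection
]

FREQ_BIN_HZ = 100.0   # invented tolerance for "same frequency"
SKY_BIN_DEG = 0.1     # invented tolerance for "same point on the sky"

def persistence_groups(signals):
    """Group signals into (frequency, sky) bins; a bin hit at
    distinctly different times is a persistent-signal candidate."""
    bins = defaultdict(list)
    for freq, ra, dec, t in signals:
        key = (round(freq / FREQ_BIN_HZ),
               round(ra / SKY_BIN_DEG),
               round(dec / SKY_BIN_DEG))
        bins[key].append(t)
    # Keep only bins observed at more than one epoch (> 1 day apart).
    return {k: ts for k, ts in bins.items() if max(ts) - min(ts) > 1.0}

candidates = persistence_groups(signals)
```

The real scorer must of course handle tolerances more carefully than fixed bins (a signal near a bin edge should still match), which is part of why normalized scoring algorithms are being developed.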
Development on the NTPCKR is currently very active. We are finishing up and debugging the candidate-finding code and starting to develop a set of normalized scoring algorithms. We will also be developing web-based tools for querying and displaying NTPCKR-generated data.
The current SETI@home application looks for signals that are narrow in frequency but long in duration. That's one way an extraterrestrial civilization could send a signal that stands out above the radio background noise. Another possibility is that they could put a lot of power into a short-duration pulsed signal with a wide bandwidth. As such a pulse travels through interstellar space, interactions with interstellar matter slow low frequencies relative to high frequencies, a process called dispersion, which spreads the pulse out over time. If we know how much dispersion a pulse has experienced, we can correct for the effect. For an extraterrestrial signal we won't know how much interstellar matter the signal encountered on its journey, so we have to try every plausible dispersion measure. That takes a lot of computing time.
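The delay follows the standard cold-plasma dispersion law: the extra arrival time of a low frequency relative to a high one grows linearly with the dispersion measure (DM), the column density of free electrons along the path. A quick sketch of the relation (the band edges and DM below are made-up example values, not our observing parameters):

```python
# Cold-plasma dispersion delay between two frequencies.
# k_DM ~ 4.149 ms GHz^2 cm^3 / pc is the standard dispersion constant.
K_DM_MS = 4.149

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) of the low-frequency edge of the band
    relative to the high-frequency edge, for dispersion measure dm
    in pc/cm^3 and frequencies in GHz."""
    return K_DM_MS * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Example: a 2.5 MHz band near 1420 MHz, trial DM of 50 pc/cm^3.
delay = dispersion_delay_ms(50.0, 1.42, 1.4225)
```

Because the true DM is unknown, the search must repeat the dedispersion for each trial value - which is exactly where the computing cost comes from.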
Astropulse is a SETI@home application that uses coherent dedispersion to search for such pulsed signals. Beyond extraterrestrial transmissions, we might also see signs of evaporating black holes or discover new pulsars.
Since we began using the ALFA receiver at Arecibo, we have found our data more susceptible to contamination by military radar. Luckily, these radar bursts occur at known periods, and the staff at Arecibo had already developed a hardware "radar blanker": observing projects can have extra bits injected into their data streams denoting when radar is expected to be on. We implemented use of this hardware signal at the beginning of 2008.
In practice, however, we find the signal provided by Arecibo isn't 100% correct, for various reasons - including the military not being too keen on broadcasting changes in their radar patterns. Since the hardware blanker is predictive, it can't be completely trusted when things change. So we are also working on a "software radar blanker," which will analyze the raw data itself and use statistical analysis to find radar patterns. This has the side benefit of letting us re-analyze pre-2008 data (which lacks the hardware blanker signal) more effectively. In effect, this is another form of RFI rejection. Two concurrent forms of software radar blanking are being developed at the time of writing.
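One statistical approach (a hypothetical sketch, not either of the blankers actually in development) is to fold the recorded power at the radar's known repetition period: a pulse repeating at that period piles up in one phase bin, while noise averages out flat across bins:

```python
import numpy as np

def fold_power(power, sample_rate_hz, period_s, n_bins=64):
    """Fold a power time series at a candidate radar period and
    return the mean power in each phase bin."""
    t = np.arange(len(power)) / sample_rate_hz
    phase_bin = ((t % period_s) / period_s * n_bins).astype(int) % n_bins
    counts = np.bincount(phase_bin, minlength=n_bins)
    sums = np.bincount(phase_bin, weights=power, minlength=n_bins)
    return sums / np.maximum(counts, 1)

def looks_like_radar(power, sample_rate_hz, period_s, threshold=5.0):
    """Flag the data if one phase bin stands far above the median
    profile level - the signature of a pulse repeating at period_s.
    The robust MAD scale and threshold here are invented values."""
    profile = fold_power(power, sample_rate_hz, period_s)
    med = np.median(profile)
    mad = np.median(np.abs(profile - med)) + 1e-12
    return (profile.max() - med) / mad > threshold
```

Samples falling in a flagged phase window would then be blanked (zeroed or excluded) before analysis, rather than being allowed to masquerade as candidate pulses.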
One of the big challenges in SETI is filtering out signals that are intelligent in origin but come from Earth. To SETI, such false positives are Radio Frequency Interference (RFI). The goal of this task is to integrate both tried-and-true RFI rejection algorithms and some novel ideas made possible by our multibeam receiver. One question we will explore is whether it is more efficient to filter RFI from all data before the NTPCKR, or to filter only the candidates it produces afterward.
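One multibeam idea (sketched hypothetically here - the detection format and tolerances are invented) exploits the fact that ALFA's beams point at different spots on the sky: a genuine sky signal should appear in only one beam at a time, so a detection seen simultaneously at nearly the same frequency in several beams is very likely local RFI:

```python
# Hypothetical detections: (beam_number, frequency_hz, time_s)
detections = [
    (0, 1420000000.0, 10.0),
    (3, 1420000005.0, 10.0),   # same frequency/time, different beam
    (5, 1419999998.0, 10.0),   # and another - smells like local RFI
    (2, 1421000000.0, 50.0),   # isolated single-beam detection
]

FREQ_TOL_HZ = 100.0   # invented matching tolerances
TIME_TOL_S = 1.0

def flag_multibeam_rfi(detections, min_beams=2):
    """Return indices of detections seen at (nearly) the same frequency
    and time in at least min_beams distinct beams."""
    flagged = set()
    for i, (b1, f1, t1) in enumerate(detections):
        beams = {b1}
        group = [i]
        for j, (b2, f2, t2) in enumerate(detections):
            if i != j and abs(f1 - f2) <= FREQ_TOL_HZ and abs(t1 - t2) <= TIME_TOL_S:
                beams.add(b2)
                group.append(j)
        if len(beams) >= min_beams:
            flagged.update(group)
    return flagged
```

This pairwise scan is quadratic in the number of detections; a production filter would sort or index by frequency and time first.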
The current data recorder has the ability to "step" in frequency while the telescope is tracking a spot on the sky. The idea is that there is a diminishing return from collecting data at the same frequency, at the same sky location, for long stretches at a time. So, in such a case, the recorder can step through a series of frequencies. That part is done. What remains are the required client-application changes, and the related frontend and backend changes, to properly handle a frequency-stepping data set.
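Conceptually the recorder's behavior is simple to sketch (the dwell time and frequency list below are invented example values, not the recorder's actual parameters):

```python
from itertools import cycle

# Hypothetical stepping schedule: center frequencies (MHz) the recorder
# cycles through while the telescope tracks one spot on the sky.
STEP_FREQS_MHZ = [1418.75, 1421.25, 1423.75, 1426.25]
DWELL_S = 60.0   # assumed time spent at each frequency step

def stepping_schedule(track_duration_s):
    """Return (start_time_s, center_freq_mhz) pairs covering one
    tracking observation by cycling through the frequency list."""
    schedule = []
    t = 0.0
    freqs = cycle(STEP_FREQS_MHZ)
    while t < track_duration_s:
        schedule.append((t, next(freqs)))
        t += DWELL_S
    return schedule
```

The hard part, as noted above, is not generating such a schedule but making the client, splitter, and backend all agree on which frequency each chunk of data was recorded at.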
Because campus cannot support our high network-bandwidth needs, we pay a monthly fee to an ISP (Hurricane Electric) for our SETI@home "internet pipe." Since we are located on campus, campus still plays a large part in our network connectivity; but given its resources and current infrastructure, it can only support us up to a point. In fact, we can only use 100 Mbits/sec of our pipe, even though we are paying for 1000 Mbits/sec (1 Gbit/sec).
Until recently this wasn't a problem or concern for us, especially given that our own hardware could barely handle more than 100 Mbits/sec. But our server/router hardware has improved over the past year, Astropulse is creating more bandwidth demand, and Moore's law still applies - we see a steady increase in bandwidth consumption even over long periods when our active user base stays the same.
While we are managing for now, we'd like access to more of our paid bandwidth. Campus is already working with us by researching what hardware such an upgrade would require. We are, unfortunately, in a building up a large hill, far from most of the campus, so a large portion of the upgrade cost would be running new underground fiber up the slope to our lab. This isn't cheap.
This project doesn't require much of our time. If we get the money for such an endeavor, we'll tell campus to go ahead with the upgrades. If we don't, we'll manage with what we've got. By the way, we aren't currently "wasting money" - we are paying far less for this Gigabit link than we were for our 100 Mbit link several years ago, and the uptime and service have been far superior.
Copyright © 2016 University of California