Message boards :
Nebula :
Eliminating redundant multiplets
David Anderson (Joined: 13 Feb 99, Posts: 173, Credit: 502,653, RAC: 0)
Recall the concepts of multiplet and pixel. When looking for multiplets in a given pixel P, we make a list of all the signals in a disc centered at P. This disc typically overlaps the 8 pixels adjacent to P. Suppose P2 is adjacent to P and there are some signals near their common boundary. These signals will be in the discs for both P and P2, so if they contain a multiplet, we'll find that multiplet in both P and P2.

Ideally, a signal should occur in at most one multiplet. Otherwise we'd get large numbers of multiplets that differ only slightly in terms of both signals and score. This would make it infeasible to manually inspect top-scoring multiplets, and would reduce the significance of score rank: the top 1000 multiplets would be mostly duplicates.

The "one multiplet per signal" limit is enforced in the processing of each pixel. But pixels are processed separately and in parallel, so the processing of one pixel can't see the results of adjacent pixels. To enforce the limit across pixels, we need an additional step, done after all pixels have been processed.

I wrote a program to do this: mp_unique.cpp. For a given pixel P, the program scans the multiplet lists of adjacent pixels. If a multiplet M in P contains a signal that's in a higher-scoring multiplet of an adjacent pixel, M is removed from P's multiplet list.

This program is pretty efficient. For now I'm running it sequentially over all pixels; for 100K pixels it took 30 minutes. It could be sped up by parallelizing it, either on a single multiprocessor node or using Condor.

The data set currently on the web has been processed with mp_unique, but there are still some glitches; I'll have a final version in a day or two.
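The per-pixel dedup pass could be sketched roughly as below. This is not the actual mp_unique.cpp: the Multiplet and Pixel structs, their field names, and the mp_unique function here are illustrative assumptions, and the real program would also need a tie-breaking rule for equal scores so a shared signal's multiplet isn't dropped from both pixels.

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

// Hypothetical data layout: each multiplet has a score and the IDs
// of the signals it contains; each pixel has its multiplet list and
// the indices of its (up to 8) adjacent pixels.
struct Multiplet {
    double score;
    std::set<int> signal_ids;
};

struct Pixel {
    std::vector<Multiplet> multiplets;
    std::vector<int> adjacent;
};

// True if the two multiplets share at least one signal.
static bool share_signal(const Multiplet& a, const Multiplet& b) {
    for (int s : a.signal_ids)
        if (b.signal_ids.count(s)) return true;
    return false;
}

// For pixel p, remove any multiplet that shares a signal with a
// higher-scoring multiplet in an adjacent pixel.
void mp_unique(std::vector<Pixel>& pixels, int p) {
    auto& mps = pixels[p].multiplets;
    mps.erase(std::remove_if(mps.begin(), mps.end(),
        [&](const Multiplet& m) {
            for (int q : pixels[p].adjacent)
                for (const Multiplet& m2 : pixels[q].multiplets)
                    if (m2.score > m.score && share_signal(m, m2))
                        return true;
            return false;
        }), mps.end());
}
```

Scanning only adjacent pixels suffices because a disc centered at P overlaps at most the 8 neighboring pixels, so any duplicate of one of P's multiplets must live in one of those neighbors.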
IntenseGuy (Joined: 25 Sep 00, Posts: 190, Credit: 23,498,825, RAC: 9)
I don't understand a word, but hope you get it working!!

SETI@home classic workunits: 103,576 | SETI@home classic CPU time: 655,753 hours
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.