Message boards :
Number crunching :
WU Inconclusive...Why?
cov_route (Joined: 13 Sep 12, Posts: 342, Credit: 10,270,618, RAC: 0)
http://setiathome.berkeley.edu/workunit.php?wuid=1109451566

My result: Spike count: 0, Pulse count: 0, Triplet count: 2, Gaussian count: 0
Wingman's result: Spike count: 0, Pulse count: 0, Triplet count: 2, Gaussian count: 0

What else feeds into the decision logic?
skildude (Joined: 4 Oct 00, Posts: 9541, Credit: 50,759,529, RAC: 60)
Flopcounter: 55876472160744.562000

Notice a difference?

In a rich man's house there is no place to spit but his face. Diogenes of Sinope
jason_gee (Joined: 24 Nov 06, Posts: 7489, Credit: 91,093,184, RAC: 0)
"What else feeds into the decision logic?"

With the various kinds of signals there are a number of parameters checked and compared server side. In the case of those triplets, there are known inaccuracies in both the 6.03 CPU application (mostly normalisation summing noise) and the 6.10 application (some chirp inaccuracy). These can cause occasional variation in detected signal characteristics, which in most cases doesn't matter or cause any disagreement, because the differences are usually small. However, since the project uses discrete detection thresholds without hysteresis or confidence intervals, when a signal sits right around threshold, one application can show a detected signal the other doesn't. In addition, there are 'best' signals that are not displayed in the stderr output.

Technologically speaking, both those applications are dated now, and substantial refinement work has gone into converging the CPU and GPU applications leading up to V7 multibeam. Because floating-point arithmetic does not deal in exact representations, and Fourier analysis in particular has accuracy limitations that vary with the implementation in the third-party FFT libraries used, bit-level agreement is not a practical requirement for seti@home, also in part because the data source is noise. That is where the validation mechanism comes in: Boinc doing its job.

For seti@home this all means, firstly, that however much application refinement is applied there will always be *some* variation between applications, especially cross-platform, and secondly, that the nature of the data source imposes real restrictions on how much accuracy can reasonably be maintained (number of significant digits, in the numeracy sense).

Jason

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
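A minimal sketch of the effect Jason describes, with made-up values (this is not SETI@home code): two mathematically identical power sums, accumulated in different orders, round differently, and a discrete threshold with no hysteresis turns that tiny difference into a detect/no-detect disagreement.

```python
# Illustrative sketch only (hypothetical threshold and sample values):
# floating-point addition is not associative, so summation order matters,
# and a hard threshold with no hysteresis can flip on the difference.

THRESHOLD = 1.0  # hypothetical detection threshold


def power_sum_forward(samples):
    """Accumulate in forward order."""
    total = 0.0
    for s in samples:
        total += s
    return total


def power_sum_reverse(samples):
    """Same mathematical sum, opposite order: rounding can differ."""
    total = 0.0
    for s in reversed(samples):
        total += s
    return total


def detect(power):
    """Discrete threshold, no hysteresis or confidence interval."""
    return power > THRESHOLD


# Ten tiny contributions plus one large one, sitting right at threshold.
samples = [1e-16] * 10 + [1.0]

p1 = power_sum_forward(samples)  # small terms accumulate first and survive
p2 = power_sum_reverse(samples)  # small terms vanish into 1.0 one by one

print(p1, detect(p1))
print(p2, detect(p2))  # the two "hosts" disagree on whether a signal exists
```

Here the forward sum ends a hair above 1.0 while the reverse sum lands on exactly 1.0, so one "application" reports a signal and the other doesn't, even though both did the same work correctly.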
Josef W. Segur (Joined: 30 Oct 99, Posts: 4504, Credit: 1,414,761, RAC: 0)
http://setiathome.berkeley.edu/workunit.php?wuid=1109451566

For that WU, the validator is also comparing one best_spike, one best_pulse, one best_triplet, and one best_gaussian from each host. The best_triplet has to be identical to one of the reported triplets, though the hosts might have disagreed on which of the two was best. Very tiny calculation differences can swing that choice, and likewise for the other three "best" signals.

Joe
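A toy illustration of Joe's point, with invented numbers (this is not the actual validator logic): both hosts report the same two triplets, but their computed powers differ by one unit in the last place, so the argmax that picks the "best" triplet lands on a different one on each host.

```python
# Hypothetical example: two hosts compute the same two triplet powers,
# but rounding in the last bit straddles the comparison, flipping which
# triplet each host reports as "best".


def best_signal(signals):
    """Pick the reported signal with the highest power."""
    return max(signals, key=lambda s: s["power"])


# Host A: triplet 0 comes out fractionally stronger.
host_a = [{"id": 0, "power": 2.5000000000000004},
          {"id": 1, "power": 2.5}]

# Host B: the same reduction, done in a different order, lands the other way.
host_b = [{"id": 0, "power": 2.5},
          {"id": 1, "power": 2.5000000000000004}]

print(best_signal(host_a)["id"])  # 0
print(best_signal(host_b)["id"])  # 1
```

Both hosts would validate on the reported triplets themselves, yet their best_triplet picks disagree, which is exactly the kind of near-tie that can leave a result inconclusive.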
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.