Validation inconclusive issue

James W

Joined: 26 May 12
Posts: 51
Credit: 4,956,027
RAC: 13
United States
Message 1493200 - Posted: 22 Mar 2014, 7:22:36 UTC

Re: Workunit 1455523575

Just curious as to why my results and those of my "wing mate" are exactly the same, and yet validation is inconclusive. I thought this was only the case if something was different in one or more results.
ID: 1493200
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1493450 - Posted: 22 Mar 2014, 17:23:25 UTC - in response to Message 1493200.  

Re: Workunit 1455523575

James W wrote:
> Just curious as to why my results and those of my "wing mate" are exactly the same, and yet validation is inconclusive. I thought this was only the case if something was different in one or more results.

The Validator looks at the uploaded result file, so just having identical counts in the stderr doesn't really mean anything. But in this case there is a good indication of the issue; your stderr includes:

Best autocorr: peak=18172.38, time=73.82, delay=4.0545, d_freq=1420887655.88, chirp=-13.763, fft_len=128k

That's an impossibly strong autocorrelation, probably indicating data corruption. It was found during the part of your crunching which began with a restart at 16.68%; the next restart was at 19.54%, and signals after that point look sensible. Since a restart involves rereading the WU file and reinitializing everything, it isn't surprising that the corruption went away. So my guess is you'll get credit based on a "weakly similar" comparison to the canonical result when that is found.
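
For illustration only, here is a minimal sketch of the kind of plausibility check that would flag such a peak. The struct, field names, and threshold are my own assumptions, not the actual validator code:

    // Illustrative sketch only -- not the actual SETI@home validator code.
    // The field names and the plausibility threshold are assumptions.
    #include <iostream>

    struct Autocorr {
        double peak;     // reported peak power
        double chirp;    // chirp rate
        int    fft_len;  // FFT length
    };

    // Genuine autocorr peaks are orders of magnitude smaller than 18172,
    // so a simple range check catches this one (threshold is made up).
    bool plausible(const Autocorr& a, double max_peak = 100.0) {
        return a.peak > 0.0 && a.peak < max_peak;
    }

    int main() {
        Autocorr best{18172.38, -13.763, 131072};  // values from the stderr above
        std::cout << (plausible(best) ? "ok" : "implausible - likely corruption")
                  << '\n';
    }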

I also see another impossible autocorr in your task 3424060786. That's inconclusive too, but again there's a restart soon after that bad autocorr was found, and you'll probably get credit.
                                                                  Joe
ID: 1493450
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1493764 - Posted: 23 Mar 2014, 0:14:41 UTC - in response to Message 1493200.  

Re: Workunit 1455523575

James W wrote:
> Just curious as to why my results and those of my "wing mate" are exactly the same, and yet validation is inconclusive. I thought this was only the case if something was different in one or more results.

I fell off the path to that inconclusive in my earlier comments; it was your task for WU 1455888451 where I found the first impossible autocorr.

For 1455523575 it is impossible to tell what doesn't match, because neither of the two applications used provides any detail about signals. But from the reported counts we can deduce a little. Given one reported Autocorr, Pulse, and Triplet, those same reported signals would also be in the uploaded result file as the best_autocorr, best_pulse, and best_triplet. In addition there would be a best_spike and a best_gaussian. So the Validator would be working with a result file containing 8 signals to compare, though only 5 unique ones.

Because both hosts reported the same counts, it is unlikely the Autocorr, Pulse, or Triplet signals don't match. So my guess would be that either the best_spike or the best_gaussian differs. Those are "best" in the sense that they came closest to being reportable, but "best" doesn't imply "good", and tiny differences in calculation can easily cause the choice between two poor signals to go either way.
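
To make that counting concrete, here is a toy sketch of the 8-records/5-unique tally. The names and the flat-list representation are invented for illustration; the real result file carries full signal records:

    // Toy illustration of the "8 signals, 5 unique" count described above.
    // Names are invented; the actual result-file format differs.
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    int main() {
        // One reported signal of each type; the same three signals also
        // appear again as best_autocorr, best_pulse, and best_triplet.
        std::vector<std::string> result_file = {
            "autocorr#1", "pulse#1", "triplet#1",  // reported signals
            "autocorr#1", "pulse#1", "triplet#1",  // best_* duplicates
            "best_spike", "best_gaussian"          // bests that were never reportable
        };
        std::set<std::string> unique(result_file.begin(), result_file.end());
        std::cout << result_file.size() << " records, "
                  << unique.size() << " unique signals\n";  // 8 records, 5 unique
    }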
                                                                  Joe
ID: 1493764
