Message boards :
Number crunching :
Question about a task that needed a tie-breaker; all tasks validated
Message board moderation
JLDun (Joined: 21 Apr 06, Posts: 573, Credit: 196,101, RAC: 0)
WU 2220159787: the first two tasks ended up disagreeing (which happens, not worried about that part...). Mine returned: Spike count: 8. Wingman's result: Spike count: 8. The tie-breaker says: Spike count: 8. And all three are counted valid.
jason_gee (Joined: 24 Nov 06, Posts: 7489, Credit: 91,093,184, RAC: 0)
Valid status, and credit, are awarded as long as 50% of the signals (in the result file, not the stderr text) match as strongly similar. In this case the comparison of detail in the result files amounts to 8 spikes, 3 triplets, and 5 'best' signals (some of which are likely repeats). Yours is first on the list, and was chosen as the 'canonical' result in the quorum, which gets recorded in the science database. That means the first wingman (possibly missing triplets) was at least weakly similar for 8 or more of the possible 16, so technically 'valid'. [But the resend was closer to yours.] It's not great for project efficiency to have to send out another round, though we have to keep in mind that few of us have fully ECC-memory-equipped, workstation-class gear with HPC-rated compute cards. That means to me that even with everything else being perfect, an 'honest glitch' penalising credit would probably be harsh, since the work was processed 'mostly' OK. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
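[Editor's note: a minimal sketch of the validity rule described above, i.e. a result is accepted when at least 50% of its signals match a wingman's as "strongly similar". The function names, signal fields, and tolerance below are illustrative assumptions, not the actual SETI@home validator code.]

```python
def strongly_similar(a, b, tolerance=0.01):
    """Treat two signals (dicts of numeric fields) as strongly similar
    if every shared field agrees within a small relative tolerance.
    The criterion itself is an illustrative stand-in."""
    return all(
        abs(a[k] - b[k]) <= tolerance * max(abs(a[k]), abs(b[k]), 1.0)
        for k in a.keys() & b.keys()
    )

def is_valid(result, wingman):
    """Grant valid status if at least 50% of the compared signal
    pairs are strongly similar."""
    pairs = list(zip(result, wingman))
    if not pairs:
        return False
    matches = sum(strongly_similar(a, b) for a, b in pairs)
    return matches * 2 >= len(pairs)
```

With 16 comparable signals (8 spikes, 3 triplets, 5 'best'), 8 strong matches would just meet the 50% bar, which is how a wingman missing some triplets can still come out 'valid'.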
JLDun (Joined: 21 Apr 06, Posts: 573, Credit: 196,101, RAC: 0)
Wasn't aware that the "threshold", as such, was for 50% strong, or that '5 best' came into it... |
bluestar (Joined: 5 Sep 12, Posts: 7031, Credit: 2,084,789, RAC: 3)
Meaning? |
jason_gee (Joined: 24 Nov 06, Posts: 7489, Credit: 91,093,184, RAC: 0)
"Wasn't aware that the "threshold", as such, was for 50% strong, or that '5 best' came into it..." Yeah, the caveat is that I haven't examined the current validator code recently, so things may be slightly different from my picture. The described chain of events is probably in the ballpark, though. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
jason_gee (Joined: 24 Nov 06, Posts: 7489, Credit: 91,093,184, RAC: 0)
"Meaning?" [First wingman was loosely similar, but not a great match.] I took it as just confirmation of what most people don't see: the actual signals in the result file (including bests), and the method of determining similarity, used to decide whether or not to reissue tasks. There are future challenges to address as project compute power increases, and those issues are more complex than they first appear. [Many more reissues than really necessary would cost project efficiency, so a tolerance is needed. Being too 'strict' is costly when the BOINC mechanism relies on unreliable hosts ---> but we still want 'good' signals.] "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
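[Editor's note: a minimal, self-contained sketch of the reissue decision alluded to above: when no two results in the quorum agree, the project sends out another copy as a tie-breaker. The agreement test and names here are hypothetical stand-ins, not the real BOINC/SETI@home validator logic.]

```python
def results_agree(a, b):
    """Stand-in agreement test: the summarised signal counts match
    exactly. The real check compares individual signals with a
    similarity tolerance."""
    return a == b

def quorum_decision(results):
    """Return ('canonical', index) if any two results in the quorum
    agree (first agreeing result wins), otherwise ('reissue', None)
    to request a tie-breaker task."""
    for i in range(len(results)):
        for j in range(i + 1, len(results)):
            if results_agree(results[i], results[j]):
                return ("canonical", i)
    return ("reissue", None)
```

This is where the tolerance trade-off lives: a stricter `results_agree` means more tie-breakers and lower project efficiency, a looser one risks admitting bad signals from unreliable hosts.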
bluestar (Joined: 5 Sep 12, Posts: 7031, Credit: 2,084,789, RAC: 3)
Sigh! |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.