Message boards :
Number crunching :
Astropulse Rumor - Reality
Author | Message |
---|---|
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
only moderately off topic, but is there a way to use the existing science clients for astropulse by redefining the wu and redesigning the splitter? |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
Have noticed over the past several days that on reboot, this wu keeps resetting to 0, with the time to completion staying at 02:03, yet each time before the reboot it had already completed 4 hours or more. Is this normal? Other wu's no problem. No, it's not normal. It indicates the checkpoint file didn't contain the information needed to continue from the previous state. That's a setiathome_enhanced WU, not AstroPulse. Both seem to occasionally create an empty checkpoint file; perhaps Josh will be able to find the cause in AstroPulse and we can get it backported to setiathome_enhanced. Joe |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
only moderately off topic, but is there a way to use the existing science clients for astropulse by redefining the wu and redesigning the splitter? It's impractical. Dedispersion is the key AstroPulse method, and that doesn't exist in the setiathome_enhanced science application. Adding that and single pulse detection would be theoretically possible, but the old code base has already gone through so many modifications by so many programmers that it is best to start with a fresh code base for AstroPulse. Joe |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
only moderately off topic, but is there a way to use the existing science clients for astropulse by redefining the wu and redesigning the splitter? Do you suspect the old code is fundamentally flawed? Or is it that adding these other features would be difficult due to the existing structure of the code? (In the latter case, I am reminded of the spaghetti code I used to work on; but that was officially made extinct in this object-oriented C++ world, right?) |
Voyager Send message Joined: 2 Nov 99 Posts: 602 Credit: 3,264,813 RAC: 0 |
Boinc just stopped completely. Uninstalled, reinstalled, attached, and the message from the server says no work, already have daily limit, but I have no tasks. I guess I'll wait till tomorrow and see if I get more work. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Boinc just stopped completely. Uninstalled, reinstalled, attached, and the message from the server says no work, already have daily limit, but I have no tasks. I guess I'll wait till tomorrow and see if I get more work. You should be able to get more work after midnight Berkeley time, when the new day starts for the servers...... Have run into this a few times when the Frozen Penny had a crash and dumped its whole cache, and then came up looking for more work after having already been issued its daily quota.... "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Sirius B Send message Joined: 26 Dec 00 Posts: 24879 Credit: 3,081,182 RAC: 7 |
Have noticed over the past several days that on reboot, this wu keeps resetting to 0, with the time to completion staying at 02:03, yet each time before the reboot it had already completed 4 hours or more. Is this normal? Other wu's no problem. Thanks Joe. Sorry I should have checked. The Astropulse wu is crunching along nicely, 92 hrs completed with 5 to go. The enhanced wu's deadline is the 6th, shall I let it continue? |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
only moderately off topic, but is there a way to use the existing science clients for astropulse by redefining the wu and redesigning the splitter? The old code base started as C code in 1997 or so. It's not fundamentally flawed, and it works very well as you know, but the conversion to C++ was primarily changing the file extensions and reconciling the few details where C++ interprets things differently. Some recent additions are purer C++, but overall it isn't basically object oriented. I'm glad; procedural code is almost certainly faster, although it's harder to maintain. There's another reason that having separate applications is desirable. Setiathome_enhanced allows systems with BOINC Whetstone benchmarks down to about 40 WMips; AstroPulse would need about a ten-month deadline if such old systems were going to be supported. I expect that would cause a lot of grief among those who demand prompt credit granting. Joe |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
So will 1) astropulse run in parallel with the 'old' format, or will I have to finally recycle my old machines? 2) astropulse initiate a new paradigm for credit granting? Perhaps we could run with a quorum of one in astropulse, and then re-issue any wu's revealing strong signals. That is, because we 'know' that most wu's will not have the desired ET response, we could ignore any false negatives and accept each result as it occurs without confirmation. The 'positives' could be double-checked later for 'false positives'. This would have the effect of doubling our processing throughput. (Of course, cheaters will lose their credits if earned improperly.) Maybe this has been discussed to death before; I don't know. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
So will... Astropulse will run alongside Multibeam just fine. Multibeam work units are tagged for the multibeam application, and Astropulse workunits are tagged for the Astropulse application. Astropulse will not initiate a new paradigm for credit. The existing paradigm works fine. The quorum should be two in Astropulse, just like Multibeam, because it is there to "weed out" incorrect or erroneous results. If two computers crunch a WU and one says "we've got signal!" and the other one says "all quiet", then there would have been a 50/50 chance of missing something. |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
Where is the flaw in my reasoning: We expect from first principles that an overwhelming number (many orders of magnitude) of wu's will have no ET signal; i.e. space isn't filled with ETs. So double-checking each wu is highly wasteful. It is also evident that most clients do not report incorrect results. That number is well below 1%, I guess; most of the client errors seem to be due to bad downloads or some sort of local issue that I suspect is easily identified by scanning the returned data file. Taken together, it seems we are wasting our time and resources by double-checking each wu (using an N=2 quorum). N=1 seems more appropriate. It is possible that if we don't double-check as we are doing now, then we would miss the false negatives. In this case, we would have to rely on wu's from nearby points and any oversampling that occurs routinely at Arecibo to catch these outliers. Also, if ET only transmitted once, then it is not likely to be ET at all. We are looking for a location that has a more-or-less continuous transmission -- like "I Love Lucy" reruns from planet X. On the other hand, any ET signal detected, of course, would need to be at least double-checked. So in this case re-issuing wu's for additional analysis would be automatic. So that is my case for N=1 computing. Where is the flaw? To be gained is an immediate 2x in computing power, which seems to be 'needed' for the AstroPulse phase of the project. |
W-K 666 Send message Joined: 18 May 99 Posts: 19078 Credit: 40,757,560 RAC: 67 |
Much as I would like to agree with you, I think that not validating all results would leave the way open to the dreaded C word. People would get what they claimed, unless the system is changed. (Think old third-party time * benchmark clients still available on people's HDDs.) The computing power will come; if you look at the SETI@home Data Distribution History page, the figures at the bottom say that we have done half as many MB units (567808) as Classic units (1168745) in an eighth of the time: days on MB 158, days on Classic 1256. When I started on Classic, units took 20+ hrs (P3 450MHz); AP is not going to be that much different. |
ML1 Send message Joined: 25 Nov 01 Posts: 20331 Credit: 7,508,002 RAC: 20 |
... So that is my case for N=1 computing. Where is the flaw? To be gained is an immediate 2x in computing power, which seems to be 'needed' for the AstroPulse phase of the project. Good question and suggestion. However... Speed freeeaaks cannot be trusted. N=1 would invite cheating, as was done on s@h classic. Or you will get garbage results due to unchecked OC-ing. Also remember that this is a scientific search that must survive peer review. N=2 and above helps to maintain the credibility of the search. Otherwise, the results just cannot be trusted... Happy crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
Re. peer review: The scientific thesis would be that we 'found' ET. To pass peer review, we would need good data for that: lots of confirmed positive results. My N=1 suggestion is consistent with that because I would expect lots of double-checking, reobservation, etc. (If we wanted to defend a thesis that ET does not exist, then we would need to check and double-check and triple-check the entire universe. My quad ain't up to that!) Re. cheaters: It seems to me that this is analogous to internet viruses, etc. and probably goes on now without much fanfare. There are some bandaids available: severe credit loss if a cheater is exposed; credit claiming inconsistent with others with similar wu's (ARs etc.); and so on. There is nothing to prevent us from dis-crediting credit if found to be bogus. Re. OC'ers: Garbage results indicating a negative result will be ignored. Garbage results indicating a (false) positive would presumably be confirmed through the double-check process and exposed as such. I don't see them to be a problem until their 'malfunctioning computers' dominate the contributions. |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65764 Credit: 55,293,173 RAC: 49 |
Mark, what's the RAC on that new 3.6GHz Kitty? My 3.52GHz Q9300 is at 4,986.29 RAC so far, and it may still be slowly rising as it's the fastest CPU I have and it's nearest the swamp cooler too. Never mind, I looked it up. :D The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
ML1 Send message Joined: 25 Nov 01 Posts: 20331 Credit: 7,508,002 RAC: 20 |
Re. peer review: The scientific thesis would be that we 'found' ET... That would indeed be the case if we were directly looking for something that we already know to be "ET". Part of the problem is that we don't know what we are looking for. Hence the best-guess approach as at present. s@h is doing a thorough search for anything that is seen to be 'unnatural' enough that it must be artificial. Lots of other interesting things may well be (and have been) found along the way. A thorough and reliable look is far better and more useful than a random and unreliable 'glance'. A question of diligence and value? Note also that the design of Boinc requires some sort of validation to be made, due to the fact that the volunteer participants are untrusted and their hardware is untrusted. Re. cheaters: It seems to me that this is analogous to internet viruses, Sorry, I don't see the analogy. There is no code hijacking, utilising exploits, or self-perpetuation for the virus analogy. etc. and probably goes on now without much fanfare. Any examples? I believe none are known for Boinc. Note that it has been designed with open security in mind. I would expect any compromise of that would be very quickly fixed. There are some bandaids available: severe credit loss if a cheater is exposed; credit claiming inconsistent with others with similar wu's (AR's etc.); and so on. There is nothing to prevent us from dis-crediting credit if found to be bogus. Various miscreants in the past who have tried to 'bend' the rules a little have been quickly exposed and dealt with. On s@h-classic, some cheaters dominated the results and required cleaning out of the system. In comparison, Boinc is well protected against such attacks. Re. OC'ers: Garbage results indicating a negative result will be ignored. And what if one of those results that were turned to garbage was exactly the signal we were looking for?... No one else would then get to see that trashed WU.
Note that OC-ers will by their very nature gobble up a LOT of WUs. They can do a lot of damage if the OC-ing is marginal and the results are misleading trash. Garbage results indicating a (false) positive would presumably be confirmed through the double-check process and exposed as such. I don't see them to be a problem until their 'malfunctioning computers' dominate the contributions. The OC-ers would soon dominate... if they were in whatever way 'rewarded' for their OC-ing regardless of whether their results were good or misleadingly bad. (Note: OC-ing is a valid pursuit for greater performance, provided that the system is fully tested and proven to remain reliable.) Happy crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Let us assume that we have a work unit that has a very low threshold signal. It's not clearly "ET" but it's something interesting. Let's assume that there exists some threshold, which for the sake of argument is "50" -- don't worry what 50 means, just that it's 50. Floating point values on a modern PC are estimates. They're very good estimates, but they're estimates. SETI crunches a lot of numbers, so errors can accumulate. Given two different CPUs, one crunches for hours and gives a result of 49.999997 and the other crunches and gives a result of 50.000001. If there is a work unit with a "real" signal, and the luck of the draw gives it that 49.999997 machine, we just missed one. If the two don't match, we've got another chance to find ET. The second reason: we've all seen the threads about "why am I down to 1wu/day?" and it is because that machine is known to be producing bad work. Maybe it needs cleaning, but it finishes work units, and gives the wrong result. "bad work" is detected by comparing the results between two machines. If you reduce the quorum to 1, then we don't have two results to compare. |
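The 49.999997-vs-50.000001 scenario above is exactly why BOINC-style validators compare floating-point results fuzzily rather than bit-for-bit. A minimal sketch of the idea (the tolerance and the averaging rule here are illustrative, not SETI@home's actual validator logic):

```python
def results_agree(a, b, rel_tol=1e-5):
    """Fuzzy compare: two hosts' results count as matching if they
    agree within a relative tolerance, since different CPUs round
    floating-point arithmetic slightly differently."""
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

def validate(results, quorum=2):
    """Quorum-2 validation sketch: return a canonical value if at
    least `quorum` results agree, else None, meaning the scheduler
    should issue another copy of the workunit."""
    for a in results:
        matches = [b for b in results if results_agree(a, b)]
        if len(matches) >= quorum:
            return sum(matches) / len(matches)  # canonical result
    return None
```

With quorum 1 the second half of this machinery disappears entirely: there is never a second result to compare against, so a host quietly producing "bad work" is never detected.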
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Re. peer review: The scientific thesis would be that we 'found' ET. To pass peer review, we would have good data for that: lots of confirmed positive results. My N=1 suggestion is consistent with that because I would expect lots of double checking, reobservation, etc. (If we wanted to defend a thesis that ET does not exist, then we need to check and double check and triple check the entire universe. My quad ain't up to that!) You're right. We're the litmus test -- we're trying to find the signals so they may be confirmed in other ways, including crunching the same work over on known perfect, validated hardware. From a "peer review" standpoint, our role is at most "PhonAcq found the signal initially, and we then used these methods to validate what was found...."
The only way to effectively cheat on credits is if one of your machines crunches both work units -- otherwise, you can dramatically overclaim and the credits will be adjusted. "manual" intervention, punishment, etc. aside, it's hard to cheat effectively.
The danger is not the false positive -- a machine that catches a signal that isn't really there. The danger is false negatives -- signals that get missed. Remember that, at least in theory, valid extraterrestrial signals are distributed evenly across all work units -- and in theory, we may not yet have processed a work unit that contains an extraterrestrial signal. We're looking for a very small needle in a galaxy-sized haystack. |
Fred W Send message Joined: 13 Jun 99 Posts: 2524 Credit: 11,954,210 RAC: 0 |
What's the RAC on that new 3.6GHz Kitty, Mark? My 3.52GHz Q9300 is at 4,986.29 RAC so far, and it may still be slowly rising as it's the fastest CPU I have and it's nearest the swamp cooler too. Never mind, I looked it up. :D @JokerCPoC I reckon I've got the sweetest little 4-core on the block ATM. My Q9450 @ 3.56GHz on air with a TRUE has been on the first page of the top hosts for the last few days. The only other non-Xtreme quaddie up there is an X3360 clocked at 4GHz by "Anonymous", so he's either got a very sweet chip or is chucking lots of Vcore at it and H2O cooling. F. |
JDWhale Send message Joined: 6 Apr 99 Posts: 921 Credit: 21,935,817 RAC: 3 |
I reckon I've got the sweetest little 4-core on the block ATM. My Q9450 @ 3.56GHz on air with a TRUE has been on the first page of the top-hosts for the last few days. The only other non-Xtreme quaddie up there is an X3360 clocked at 4GHz by "Anonymous" so he's either got a very sweet chip or is chucking lots of Vcore at it and H2O cooling. "TRUE" what? Creation Date of 12 Jun 2007 22:33:49 UTC? Hmmmm... IIRC, you started up the Q9450 ~65 days ago... so you might still be collecting credits from the prior CPU? I don't have a problem with "plugging in" a new CPU, but you really should do a reset and start from scratch when you do the swap. JMO. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.