Message boards : Number crunching : Stock vs Chickens
archae86 · Joined: 31 Aug 99 · Posts: 909 · Credit: 1,582,816 · RAC: 0
Now seems as good a time as any to update my speed comparison chart.

Nice data, thanks. The consistent advantage over stock across a big sample of Angle Range is encouraging to any doubters about using the improved apps. It seems that the 2.4v disadvantage is almost entirely confined to some results in about the 1.5 to 6 AR area. Do you suppose by luck those are a cluster of noisier WUs? There seem rather a lot of them for that to be the answer, but then, your machines can download rather a lot of WUs in short order when they are available. Also it seems that the plotting precedence (it appears that both yellow and turquoise mask pink) may be slightly exaggerating the difference by hiding some possible pink on the baseline.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14656 · Credit: 200,643,578 · RAC: 874
Now seems as good a time as any to update my speed comparison chart.

OK. Here's a version with 2.4v brought to the front of the Z-axis. I don't think that makes it seem any faster (except possibly in the VLAR splodge). Unfortunately, I didn't get a consistent enough set of data points between AR=0.02 and AR=0.3, which is where I suspect the interesting action is going to be.

[chart image] (direct link)

I've tried to correct for noisy data - all those blobs sitting along the X-axis are -9 overflows (whatever actual CPU time they took). The really short ones I eliminated from the data entirely.

Edit - to complete the set, let's give 2.2B the centre stage.

[chart image] (direct link)
archae86 · Joined: 31 Aug 99 · Posts: 909 · Credit: 1,582,816 · RAC: 0
It clarifies that the 2.4v near 1.5 AR extends down to baseline, which was a bit hidden -- but the points at hand certainly average inferior for it there. Thanks.

Unfortunately, I didn't get a consistent enough set of data points between AR=0.02 and AR=0.3, which is where I suspect the interesting action is going to be.

Interesting in showing app differences, but maybe not so important in the overall performance comparison. A simplistic model -- usually the telescope is doing one of:

1. intentionally tracking a point in the sky -- A.R. near zero, seems to be about 0.01 in practice
2. motoring off to reach the next target -- A.R. high, mostly 1 to 5
3. not actively moving the pointing, so moving with the Earth -- A.R. near 0.4

If that simplistic model has some truth, then the .02 to .3 range may not represent a big enough fraction of WUs to be highly important to overall application performance.

After I wrote to this point, I searched my PC and found a couple of CPU vs. AR graphs from my SETI Classic days. The graphs are for two different PCs, but may also differ enough in collection time to be different versions of the application. In case it is of interest, Dell1 is a desktop with a 930 MHz Coppermine CPU. Stoll5 was a desktop with a Northwood (2.4 GHz, I think).

Dell1
desk5

As you can tell from the execution times, these data were collected over pretty substantial timespans:

Dell1: 11/15/2000 to 2/21/2001
pastoll1-desk5: 6/13/2003 to 10/6/2003

At least over those periods, WUs between .02 and .3 in Angle range were pretty scarce. My notes make it clear these data came from logs collected by SETISpy.

(edited to label image URLs)
archae86 · Joined: 31 Aug 99 · Posts: 909 · Credit: 1,582,816 · RAC: 0
If that simplistic model has some truth, then the .02 to .3 range may not represent a big enough fraction of WUs to be highly important to overall application performance.

I'm afraid I'm responding to my own post, but I realized that my saved graphs used a scale poorly suited to this purpose. Here are two graphs of the same data, but with a log axis for Angle Range, and without clipping the higher AR data.

log Dell1
log Desk5
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14656 · Credit: 200,643,578 · RAC: 874
A simplistic model -- usually the telescope is doing one of:

All of which makes perfect sense. So why are we seeing such a huge number around AR=1.48? I'll see if I can plot up a frequency distribution of ARs later, but for the moment they seem to be the commonest variety. We know the receiver can 'motor off' much faster than that - I've seen AR up to 15 - but these look much more deliberate.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14656 · Credit: 200,643,578 · RAC: 874
I'll see if I can plot up a frequency distribution of ARs later....

[histogram image] (direct link)

I've done my own log scale: this divides the observed AR range into 80 'buckets' at intervals of 0.1 in Ln(AR). The X-axis labels are the lower bound of each bucket.
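In case anyone wants to reproduce that bucketing, a minimal Python sketch (the 0.1 bucket width in Ln(AR) is taken from the post above; the sample ARs below are invented for illustration, not real task data):

```python
import math
from collections import Counter

def ar_bucket(ar, width=0.1):
    """Index of the log-scale bucket containing this angle range."""
    return math.floor(math.log(ar) / width)

def bucket_lower_bound(index, width=0.1):
    """Lower AR bound of a bucket, used as its X-axis label."""
    return math.exp(index * width)

# Hypothetical sample of angle ranges; real values would come from task logs.
ars = [0.009, 0.01, 0.41, 0.42, 1.47, 1.48, 1.50, 4.9, 15.0]

counts = Counter(ar_bucket(ar) for ar in ars)
for idx in sorted(counts):
    print(f"AR >= {bucket_lower_bound(idx):7.3f}: {counts[idx]} task(s)")
```

With a fixed width in Ln(AR), equal bucket counts correspond to equal multiplicative spans of AR, which is what makes the 0.01 tracking cluster and the 1.48 cluster comparable on one axis.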
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0
A simplistic model -- usually the telescope is doing one of:

Take a look at this GALFACTS page, which shows how basketweave scanning works. That project will cover the whole Arecibo sky using that technique; other projects have used it for smaller areas. Kevin Douglas also described the technique in this SETI@home Staff Blog post.

Joe
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14656 · Credit: 200,643,578 · RAC: 874
So a large part of the observing time is going to be at these AR=1.48 rates, which crunch in a quarter of the time of the 'standing still and looking straight up' AR=0.39 - it isn't just a quirk of the early MB tapes. From what we saw during the 'saw-tooth waveforms', the end-to-end splitter/filestore/download system just can't cope with the demand when these 'shorties' are being doled out: and the demand isn't going to get any lower - Moore's Law on the user base's processors will see to that. SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.
Fred W · Joined: 13 Jun 99 · Posts: 2524 · Credit: 11,954,210 · RAC: 0
I copy and paste in a whole page of 20 results, then click on each result in turn to open the result screen to get the AR and check for -9s. That's the only manual bit - if anyone knows how to get AR out of a standard logging tool, I'm sure we'd all be grateful.

@Richard: is this something that would still be useful? Not from a standard logging tool, but I may have something to help (in Excel).
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14656 · Credit: 200,643,578 · RAC: 874
I copy and paste in a whole page of 20 results, then click on each result in turn to open the result screen to get the AR and check for -9s. That's the only manual bit - if anyone knows how to get AR out of a standard logging tool, I'm sure we'd all be grateful.

You have a PM.
DJStarfox · Joined: 23 May 01 · Posts: 1066 · Credit: 1,226,053 · RAC: 2
SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.

Maybe, instead of splitting each "chunk" of MB data into 256 WUs (and then 512+ results), they could just double or even 8x the size of each WU. That would slow down the clients and decrease the number of files floating around the system.
Astro · Joined: 16 Apr 02 · Posts: 8026 · Credit: 600,015 · RAC: 0
SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.

Hmmm, it's possible to change WUs from 107 seconds to some other value - say double for this example, like 214 seconds. That would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year. Meaning upwards of 6 months max before credit would be granted (assuming the first two were returned, but if one timed out it could be another 6 months).
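As a sanity check on that trade-off, a tiny Python sketch (the 107-second figure is from the post above; the assumption, shared by both suggestions, is that crunch time and deadline scale linearly with data length while connection and file counts scale inversely):

```python
def scaled_wu(base_data_seconds=107, scale=2):
    """Effect of packing `scale` times as much telescope data into each WU.

    A back-of-the-envelope sketch: crunch time and deadline are assumed to
    scale up with data length; the number of files and server connections
    needed for the same total data scales down by the same factor.
    """
    return {
        "data_seconds": base_data_seconds * scale,
        "crunch_time_factor": scale,
        "deadline_factor": scale,
        "connections_factor": 1 / scale,
    }

print(scaled_wu())          # the 107s -> 214s doubling discussed above
print(scaled_wu(scale=8))   # the 8x suggestion from the previous post
```

The inverse relationship is the whole argument: total work delivered is unchanged, so every factor gained in WU size is a factor removed from splitter, filestore, and download traffic.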
DJStarfox · Joined: 23 May 01 · Posts: 1066 · Credit: 1,226,053 · RAC: 2
Hmmm, it's possible to change WUs from 107 seconds to some other value - say double for this example, like 214 seconds. That would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year. Meaning upwards of 6 months max before credit would be granted (assuming the first two were returned, but if one timed out it could be another 6 months).

Therein lies the problem. There are no minimum system requirements for this project (that I can find anywhere). It's simply, "Can you get it done before the deadline?" You have to admit though, even running on a 586 chip, 52 days is a long time to crunch one WU. I'd say 30 days is a round number, but this should be decided with careful research of the system and not a forum addict's post. :) I have to trust that they set deadlines so far out on purpose for all the slow computers out there... but you have to wonder what *is* the slowest computer out there still crunching? I don't see that on the stats page.
DJStarfox · Joined: 23 May 01 · Posts: 1066 · Credit: 1,226,053 · RAC: 2
Hmmm, it's possible to change WUs from 107 seconds to some other value - say double for this example, like 214 seconds. That would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year. Meaning upwards of 6 months max before credit would be granted (assuming the first two were returned, but if one timed out it could be another 6 months).

I can see the scientists being given the news...

Sci1: We found an E.T. broadcast from space! Quick, we should answer back and say hi.
Sci2: Uh, sir, the message was sent almost a year ago.
Sci1: What?!? Why didn't I find out about this sooner?
Sci2: The telescope data took a year to process because a 486 computer was working on it. So we had to set computation deadlines that far out.
Sci1: Bugger. They've probably given up on Earth and are looking elsewhere by now.
Astro · Joined: 16 Apr 02 · Posts: 8026 · Credit: 600,015 · RAC: 0
As far as I know, I have the slowest attached computer currently crunching. It's my Pentium 60 (as in MHz) with 48 MB of RAM. You can read about its progression in this thread.

NOTE: Sometime tomorrow it'll finish its third WU on time. The longest so far took 36 days. Tomorrow's return should show about 490 hours of crunch time.
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0
Hmmm, it's possible to change WUs from 107 seconds to some other value - say double for this example, like 214 seconds. That would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year. Meaning upwards of 6 months max before credit would be granted (assuming the first two were returned, but if one timed out it could be another 6 months).

The deadline is intended to represent how long a system with a BOINC "Whetstone" benchmark of 33.33 MIPS would take to do a WU (or one with 100 WMIPS which is only on 8 hours a day, etc.). That is the minimum requirement as far as speed is concerned. Another is at least 31 MiB of RAM reported by BOINC.

You have to admit though, even running on a 586 chip, 52 days is a long time to crunch one WU. I'd say 30 days is a round number, but this should be decided with careful research of the system and not a forum addict's post. :) I have to trust that they set deadlines so far out on purpose for all the slow computers out there... but you have to wonder what *is* the slowest computer out there still crunching? I don't see that on the stats page.

The P60 is close to the minimum, and has in effect demonstrated that for a WU at 0.726 AR (angle range) the deadline is close to correct at 25.55 days. The maximum deadline of 113 days for AR 0.226 WUs is ridiculous; the P60 could probably do four of those in that amount of time. But with the 15-day deadline for AR 1.11, the P60 would probably not be able to finish in time.

Joe
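Joe's rule can be written out directly: the deadline is the time a notional 33.33 MIPS Whetstone host would need for the WU's estimated operation count. A Python sketch of that model (the operation counts printed below are back-calculated from the deadlines he quotes, not taken from any splitter code):

```python
REFERENCE_WHETSTONE_OPS = 33.33e6   # ops/sec of the notional minimum host
SECONDS_PER_DAY = 86400

def deadline_days(estimated_ops):
    """Days the 33.33 MIPS reference host would need for this WU."""
    return estimated_ops / REFERENCE_WHETSTONE_OPS / SECONDS_PER_DAY

def implied_ops(days):
    """Back out the operation estimate that yields a given deadline."""
    return days * SECONDS_PER_DAY * REFERENCE_WHETSTONE_OPS

# Deadlines quoted in the post, by angle range:
for ar, days in [(0.726, 25.55), (0.226, 113), (1.11, 15)]:
    print(f"AR {ar}: {days}-day deadline -> ~{implied_ops(days):.2e} ops assumed")
```

The same formula covers the "100 WMIPS but only on 8 hours a day" case, since 100 MIPS at a one-third duty cycle averages out to the same 33.33 MIPS.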
DJStarfox · Joined: 23 May 01 · Posts: 1066 · Credit: 1,226,053 · RAC: 2
As far as I know, I have the slowest attached computer currently crunching. It's my Pentium 60 (as in MHz) with 48 MB of RAM. ... The longest so far took 36 days. Tomorrow's return should show about 490 hours of crunch time.

36 days (just over 5 weeks) sounds good. That should be the next deadline for all WUs, and it's better than the current 8 weeks. Seeing as the P60 was the first 586 chip, I think you're right to say that's the slowest CPU that can crunch SETI. Some 486 chips did not have a math co-processor, so they would just be way too slow to consider.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.