Stock vs Chickens

Message boards : Number crunching : Stock vs Chickens

archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 636503 - Posted: 8 Sep 2007, 15:18:19 UTC - in response to Message 636436.  

Now seems as good a time as any to update my speed comparison chart.

There doesn't seem to be a lot of variation between the different optimisations, but if anything 2.2B is holding up well in comparison with the newer ones.

Nice data, thanks. The consistent advantage over stock across a big sample of Angle Ranges is encouraging to any doubters about using the improved apps.

It seems that the 2.4v disadvantage is almost entirely confined to results in roughly the 1.5 to 6 AR area. Do you suppose those are, by luck, a cluster of noisier WUs? There seem rather a lot of them for that to be the answer, but then, your machines can download rather a lot of WUs in short order when they are available. It also seems that the plotting precedence (both yellow and turquoise appear to mask pink) may be slightly exaggerating the difference by hiding some possible pink on the baseline.

ID: 636503
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 636528 - Posted: 8 Sep 2007, 15:53:04 UTC - in response to Message 636503.  
Last modified: 8 Sep 2007, 15:59:52 UTC

Now seems as good a time as any to update my speed comparison chart.

There doesn't seem to be a lot of variation between the different optimisations, but if anything 2.2B is holding up well in comparison with the newer ones.

Nice data, thanks. The consistent advantage over stock across a big sample of Angle Ranges is encouraging to any doubters about using the improved apps.

It seems that the 2.4v disadvantage is almost entirely confined to results in roughly the 1.5 to 6 AR area. Do you suppose those are, by luck, a cluster of noisier WUs? There seem rather a lot of them for that to be the answer, but then, your machines can download rather a lot of WUs in short order when they are available. It also seems that the plotting precedence (both yellow and turquoise appear to mask pink) may be slightly exaggerating the difference by hiding some possible pink on the baseline.

OK. Here's a version with 2.4v brought to the front of the Z-axis. I don't think that makes it seem any faster (except possibly in the VLAR splodge). Unfortunately, I didn't get a consistent enough set of data points between AR=0.02 and AR=0.3, which is where I suspect the interesting action is going to be.


[chart image] (direct link)

I've tried to correct for noisy data - all those blobs sitting along the X-axis are -9 overflows (whatever actual CPU time they took). The really short ones I eliminated from the data entirely.

Edit - to complete the set, let's give 2.2B the centre stage


[chart image] (direct link)
ID: 636528
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 636642 - Posted: 8 Sep 2007, 17:56:46 UTC - in response to Message 636528.  
Last modified: 8 Sep 2007, 18:03:55 UTC


OK. Here's a version with 2.4v brought to the front of the Z-axis. I don't think that makes it seem any faster (except possibly in the VLAR splodge).

It clarifies that the 2.4v data near 1.5 AR extend down to the baseline, which was a bit hidden--but the points at hand certainly average inferior for it there. Thanks.
Unfortunately, I didn't get a consistent enough set of data points between AR=0.02 and AR=0.3, which is where I suspect the interesting action is going to be.

Interesting in showing ap differences, but maybe not so important in the overall performance comparison.

A simplistic model--usually the telescope is one of:
1. intentionally tracking a point in the sky -- A.R. near zero--seems to be about 0.01 in practice
2. motoring off to reach the next target -- A.R. high, mostly 1 to 5
3. not actively moving the point, so moving with the Earth -- A.R. near 0.4

If that simplistic model has some truth, then the .02 to .3 range may not represent a big enough fraction of WUs to be highly important to overall application performance.
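For what it's worth, that simplistic model can be sketched in a few lines of Python. The mode names and AR cutoffs below are illustrative guesses based on the three cases above, not project-defined thresholds:

```python
# Sketch of the "simplistic model" above: classify a workunit's Angle Range
# (AR) into the three telescope modes. Cutoffs are illustrative guesses.
def telescope_mode(ar: float) -> str:
    if ar < 0.05:       # near zero: tracking a fixed point (~0.01 in practice)
        return "tracking"
    if ar < 1.0:        # around 0.4: dish not driven, drifting with the Earth
        return "drifting"
    return "slewing"    # roughly 1 to 5: motoring off to the next target

print(telescope_mode(0.01))   # tracking
print(telescope_mode(0.4))    # drifting
print(telescope_mode(1.48))   # slewing
```

On this reading, the .02 to .3 range falls in a gap between "tracking" and "drifting", which is consistent with it being sparsely populated.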

After I wrote to this point, I searched my PC and found a couple of CPU time vs. AR graphs from my SETI Classic days. The graphs are for two different PCs, but may also differ enough in collection period to reflect different versions of the application.

In case it is of interest, Dell1 is a desktop with a 930 MHz Coppermine CPU; Stoll5 was a desktop with a Northwood (2.4 GHz, I think).

Dell1
desk5

As you can tell from the execution times, these data were collected over pretty substantial timespans.
Dell1 11/15/2000 to 2/21/2001
pastoll1-desk5 6/13/2003 to 10/6/2003

At least over those periods, WUs between .02 and .3 in Angle Range were pretty scarce.

My notes make it clear these data came from logs collected by SETISpy.

(edited to label image URLs)

ID: 636642
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 636723 - Posted: 8 Sep 2007, 19:21:22 UTC - in response to Message 636642.  

If that simplistic model has some truth, then the .02 to .3 range may not represent a big enough fraction of WUs to be highly important to overall application performance.

After I wrote to this point, I searched my PC and found a couple of CPU vs. AR graphs from my SETI Classic days.
<snip>
Dell1
desk5
<snip>

At least over those periods, WUs between .02 and .3 in Angle Range were pretty scarce.

I'm afraid I'm responding to my own post, but I realized that my saved graphs used a scale poorly suited to this purpose. Here are two graphs of the same data, but with a log axis for Angle Range and without clipping the higher-AR data.

log Dell1
log Desk5

ID: 636723
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 637108 - Posted: 9 Sep 2007, 10:42:54 UTC - in response to Message 636642.  

A simplistic model--usually the telescope is one of:
1. intentionally tracking a point in the sky -- A.R. near zero--seems to be about 0.01 in practice
2. motoring off to reach the next target -- A.R. high, mostly 1 to 5
3. not actively moving the point, so moving with the Earth -- A.R. near 0.4

All of which makes perfect sense.

So why are we seeing such a huge number around AR=1.48? I'll see if I can plot up a frequency distribution of ARs later, but for the moment they seem to be the commonest variety.

We know the receiver can 'motor off' much faster than that - I've seen AR up to 15 - but these look much more deliberate.
ID: 637108
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 637122 - Posted: 9 Sep 2007, 12:08:32 UTC - in response to Message 637108.  

I'll see if I can plot up a frequency distribution of ARs later....



[chart image] (direct link)

I've done my own log scale: this divides the observed AR range into 80 'buckets', at intervals of ln(AR) = 0.1.

The X-axis labels are the lower bound of each bucket.
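That bucketing scheme can be sketched in a few lines of Python; the sample ARs below are made up for illustration, not real workunit data:

```python
import math
from collections import Counter

def ar_bucket(ar: float) -> float:
    """Lower AR bound of the log-scale bucket containing `ar`
    (buckets at intervals of ln(AR) = 0.1, as in the chart)."""
    index = math.floor(math.log(ar) / 0.1)
    return math.exp(index * 0.1)

# Illustrative ARs only, not real workunit data:
sample_ars = [0.01, 0.39, 0.41, 1.48, 1.50, 5.0]
histogram = Counter(round(ar_bucket(ar), 3) for ar in sample_ars)
```

Each key in `histogram` is a bucket's lower bound, matching the X-axis labelling described above.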
ID: 637122
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 637178 - Posted: 9 Sep 2007, 14:17:13 UTC - in response to Message 637108.  

A simplistic model--usually the telescope is one of:
1. intentionally tracking a point in the sky -- A.R. near zero--seems to be about 0.01 in practice
2. motoring off to reach the next target -- A.R. high, mostly 1 to 5
3. not actively moving the point, so moving with the Earth -- A.R. near 0.4

All of which makes perfect sense.

So why are we seeing such a huge number around AR=1.48? I'll see if I can plot up a frequency distribution of ARs later, but for the moment they seem to be the commonest variety.

We know the receiver can 'motor off' much faster than that - I've seen AR up to 15 - but these look much more deliberate.

Take a look at this GALFACTS page, which shows how basketweave scanning works. That project will cover the whole Arecibo sky using that technique; other projects have used it for smaller areas. Kevin Douglas also described the technique in this SETI@home Staff Blog post.
                                                               Joe
ID: 637178
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 637208 - Posted: 9 Sep 2007, 15:15:37 UTC

So a large part of the observing time is going to be at these AR=1.48 scans, which crunch in a quarter of the time of the 'standing still and looking straight up' AR=0.39 ones - it isn't just a quirk of the early MB tapes.

From what we saw during the 'saw-tooth waveforms', the end-to-end splitter/filestore/download system just can't cope with the demand when these 'shorties' are being doled out: and the demand isn't going to get any lower - Moore's Law on the user-base processors will see to that.

SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.
ID: 637208
Fred W
Volunteer tester

Joined: 13 Jun 99
Posts: 2524
Credit: 11,954,210
RAC: 0
United Kingdom
Message 648631 - Posted: 25 Sep 2007, 21:50:35 UTC - in response to Message 633567.  

I copy and paste in a whole page of 20 results, then click on each result in turn to open the result screen to get the AR and check for -9s. That's the only manual bit - if anyone knows how to get AR out of a standard logging tool, I'm sure we'd all be grateful.

PM me with an email address if you'd like a copy of the spreadsheet (with or without current data)


@Richard,

Is this something that would still be useful? Not from a standard logging tool but I may have something to help (in Excel).
ID: 648631
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 648709 - Posted: 25 Sep 2007, 23:04:46 UTC - in response to Message 648631.  

I copy and paste in a whole page of 20 results, then click on each result in turn to open the result screen to get the AR and check for -9s. That's the only manual bit - if anyone knows how to get AR out of a standard logging tool, I'm sure we'd all be grateful.

PM me with an email address if you'd like a copy of the spreadsheet (with or without current data)


@Richard,

Is this something that would still be useful? Not from a standard logging tool but I may have something to help (in Excel).

You have a PM.
ID: 648709
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 648777 - Posted: 26 Sep 2007, 2:04:53 UTC - in response to Message 637208.  

SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.


Maybe, instead of splitting each "chunk" of MB data into 256 WUs (then 512+ results), they could just double or even 8x the size of each WU. That would slow down the clients and decrease the number of files floating around the system.
ID: 648777
Astro
Volunteer tester

Joined: 16 Apr 02
Posts: 8026
Credit: 600,015
RAC: 0
Message 648786 - Posted: 26 Sep 2007, 2:20:08 UTC - in response to Message 648777.  

SETI@home is going to have either to offload a large number of volunteers onto other BOINC projects, or to seriously re-engineer that work unit file store system.


Maybe, instead of splitting each "chunk" of MB data into 256 WUs (then 512+ results), they could just double or even 8x the size of each WU. That would slow down the clients and decrease the number of files floating around the system.

Hmmm, it's possible to change WUs from 107 seconds to some other value (say, double for this example: 214 seconds); that would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year - meaning upwards of a 6-month max before credit would be granted (assuming the first two results were returned; if one timed out, it could be another 6 months).
ID: 648786
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 648794 - Posted: 26 Sep 2007, 2:37:49 UTC - in response to Message 648786.  

Hmmm, it's possible to change WUs from 107 seconds to some other value (say, double for this example: 214 seconds); that would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year - meaning upwards of a 6-month max before credit would be granted (assuming the first two results were returned; if one timed out, it could be another 6 months).


Therein lies the problem. There are no minimum system requirements for this project (that I can find anywhere). It's simply, "Can you get it done before the deadline?"

You have to admit, though, even running on a 586 chip, 52 days is a long time to crunch one WU. I'd say 30 days is a round number, but this should be decided with careful research of the system and not a forum addict's post. :) I have to trust that they set deadlines so far out on purpose for all the slow computers out there... but you have to wonder what *is* the slowest computer out there still crunching? I don't see that on the stats page.
ID: 648794
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 648803 - Posted: 26 Sep 2007, 2:44:16 UTC - in response to Message 648786.  

Hmmm, it's possible to change WUs from 107 seconds to some other value (say, double for this example: 214 seconds); that would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year - meaning upwards of a 6-month max before credit would be granted (assuming the first two results were returned; if one timed out, it could be another 6 months).


I can see the scientists being given the news.....

Sci1: We found an E.T. broadcast from space! Quick, we should answer back and say hi.
Sci2: Uh, sir, the message was sent almost a year ago.
Sci1: What?!? Why didn't I find out about this sooner?
Sci2: The telescope data took a year to process because a 486 computer was working on it. So, we had to set computation deadlines that far out.
Sci1: Bugger. They've probably given up on Earth and are looking elsewhere by now.
ID: 648803
Astro
Volunteer tester

Joined: 16 Apr 02
Posts: 8026
Credit: 600,015
RAC: 0
Message 648808 - Posted: 26 Sep 2007, 2:51:43 UTC

As far as I know, I have the slowest attached computer currently crunching. It's my Pentium 60 (as in MHz) with 48 MB of RAM. You can read about its progression in this thread. NOTE: sometime tomorrow it'll finish its third WU on time; the longest so far took 36 days. Tomorrow's return should show about 490 hours of crunch time.
ID: 648808
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 648810 - Posted: 26 Sep 2007, 2:56:32 UTC - in response to Message 648794.  

Hmmm, it's possible to change WUs from 107 seconds to some other value (say, double for this example: 214 seconds); that would double the crunch time and lower the number of connections. It would also double the deadlines, so the long ones now due in December would be due in March/April next year - meaning upwards of a 6-month max before credit would be granted (assuming the first two results were returned; if one timed out, it could be another 6 months).


Therein lies the problem. There are no minimum system requirements for this project (that I can find anywhere). It's simply, "Can you get it done before the deadline?"

The deadline is intended to represent how long a system with a BOINC "Whetstone" benchmark of 33.33 MIPS would take to do a WU (or one with 100 WMIPS which is only on 8 hours a day, etc.). That is the minimum requirement as far as speed is concerned. Another is at least 31 MiB of RAM reported by BOINC.
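That rule of thumb can be sketched numerically. The operation count used here is an invented placeholder, since the real per-WU estimate varies with AR:

```python
# Sketch of the deadline rule described above: the deadline is the time a
# 33.33-WMIPS host running around the clock would need for the WU. The
# operation count is a made-up placeholder, not a real project estimate.
BASELINE_WMIPS = 33.33  # minimum-speed reference host

def deadline_days(estimated_mops: float, wmips: float = BASELINE_WMIPS,
                  hours_per_day: float = 24.0) -> float:
    """Calendar days a `wmips`-rated host crunching `hours_per_day` hours
    a day needs to finish `estimated_mops` million operations."""
    cpu_seconds = estimated_mops / wmips
    return cpu_seconds / (hours_per_day * 3600.0)

# A 100-WMIPS host on only 8 hours a day lands on (almost) the same
# calendar deadline as the 33.33-WMIPS reference running 24 hours a day:
reference = deadline_days(1_000_000)
part_time = deadline_days(1_000_000, wmips=100.0, hours_per_day=8.0)
```

The near-equality (100 x 8 is roughly 33.33 x 24 MIPS-hours per day) is what makes the two cases in Joe's parenthetical interchangeable.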

You have to admit, though, even running on a 586 chip, 52 days is a long time to crunch one WU. I'd say 30 days is a round number, but this should be decided with careful research of the system and not a forum addict's post. :) I have to trust that they set deadlines so far out on purpose for all the slow computers out there... but you have to wonder what *is* the slowest computer out there still crunching? I don't see that on the stats page.

The P60 is close to the minimum, and has in effect demonstrated that, for a WU at 0.726 AR (angle range), the deadline of 25.55 days is close to correct. The maximum deadline of 113 days for AR 0.226 WUs is ridiculous; the P60 could probably do four of those in that amount of time. But with the 15-day deadline for AR 1.11, the P60 would probably not be able to finish in time.
                                                                  Joe
ID: 648810
DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 648996 - Posted: 26 Sep 2007, 13:31:06 UTC - in response to Message 648808.  

As far as I know, I have the slowest attached computer currently crunching. It's my Pentium 60 (as in MHz) with 48 MB of RAM. You can read about its progression in this thread. NOTE: sometime tomorrow it'll finish its third WU on time; the longest so far took 36 days. Tomorrow's return should show about 490 hours of crunch time.


36 days (just over 5 weeks) sounds good. That should be the next deadline for all WUs, and it's better than the current 8 weeks. Seeing as the P60 was the first 586-class chip, I think you're right to say that's the slowest CPU that can crunch SETI. Some 486 chips did not have a math co-processor, so they would just be way too slow to consider.
ID: 648996
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.