How much RAC does SETI@home need for pseudo-realtime analysis?



Message boards : Number crunching : How much RAC does SETI@home need for pseudo-realtime analysis?

Author Message
Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 802629 - Posted: 27 Aug 2008, 21:38:17 UTC

Greetings folks, I'm new to the whole BOINC scene so please forgive my ignorance. I have a technical question about the SETI@home project itself:

Assuming that SETI@home had sufficient server resources and bandwidth, how high a sustained RAC would SETI@home need to process workunits as fast as it receives them from the telescope's data feeds, disregarding the latency incurred by multiple validations and by actually receiving the raw data in the first place? I am unclear as to whether this can be determined by looking at the result turn-around time on the Server status page.

For now I'm just curious about the standard multi-beam analysis and not Astropulse, although it would be fascinating to know that too if someone knows how to figure it out.

Thank you!

P.S. it would be awesome if someday this was figured out and calculated on the Server status page (perhaps as a ratio - Current RAC/Desired RAC?) It would give some of us something more "tangible" to focus on when determining what types of resources would be necessary to contribute a desired amount of processing to the project.

Profile Keith T.
Volunteer tester
Avatar
Send message
Joined: 23 Aug 99
Posts: 738
Credit: 232,825
RAC: 18
United Kingdom
Message 802700 - Posted: 28 Aug 2008, 1:22:04 UTC - in response to Message 802629.
Last modified: 28 Aug 2008, 1:23:34 UTC


Not sure if I totally understand your question.

The data is collected at Arecibo Observatory and shipped by courier to the Space Sciences Laboratory at UC Berkeley on 750 GB HDDs. (It used to be DLT tapes, and the project staff still refer to a recording session on disk as a "tape".)

Of course the data that is received at Arecibo may have taken thousands of years to arrive at the telescope, from some distant star system.

HTH, Keith
____________
Sir Arthur C Clarke 1917-2008

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 802708 - Posted: 28 Aug 2008, 2:34:44 UTC - in response to Message 802629.


Interesting question.

To rephrase: how fast do we have to crunch to stay "caught up" with data from the telescope?

The problem is that the telescope may be off line (or the receiver may be off line) for extended periods. I'm sure someone who watches can tell us how many actual "observing days" or "recording days" are available in the average year.

I suspect that we're actually crunching faster, on average, than data arrives.
____________

Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 802710 - Posted: 28 Aug 2008, 2:39:09 UTC - in response to Message 802700.
Last modified: 28 Aug 2008, 2:42:39 UTC




Sorry, let me clarify:

Assume for a second I had an infinite amount of money to spend on computing power (which I don't, I'm just curious), and I wanted to process the entire batch in the same amount of time as it would normally take for another batch to arrive. How much computing power (in units of RAC) would I have to purchase to do this?

A hypothetical answer would be:

"Well dude, we get a new batch every 'X' days. So if you wanted to be able to complete an entire batch worth of processing in X or less days, given infinite bandwidth and server capacity on our end, you would need a sustained RAC of about 'Y' to do it."

EDIT:
Ned Ludd worded it even better than I did the second time around too...

Alinator
Volunteer tester
Send message
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 802712 - Posted: 28 Aug 2008, 2:51:22 UTC

If what you mean is to process it as fast as the telescope can collect it (when it's actually collecting data which SAH can use), you're talking orders of magnitude more crunching horsepower than what we have currently available.

Joe Segur probably has a pretty good ballpark figure worked out from the last time the question came up.

Alinator

Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 802745 - Posted: 28 Aug 2008, 6:42:12 UTC - in response to Message 802712.
Last modified: 28 Aug 2008, 6:45:41 UTC


Thank you Alinator. From the mention of Joe's name I was able to track down a post:

http://setiathome.berkeley.edu/forum_thread.php?id=45805&nowrap=true#723956

From there I got some numbers to crunch...

If the receiver picks up 151,793 work units / hour (per Joe), and receives for an average of 9.5 hours on any given day (per Joe), and I assume that the average multi-beam workunit leads to 75 claimed credits, then we have 108,152,512.5 credits / day of work before replication.

If I then assume that a work unit needs to be crunched an average of 2.5 times, I come up with an average workload of 270,381,281.25 credits / day.

From http://boincstats.com/stats/project_graph.php?pr=sah:

SETI@home is currently calculating with a RAC of 45,648,330. Since each host that crunches a work unit also claims credit for it, I do not need to adjust for the average 2.5 crunches per work unit. Therefore, SETI@home is currently calculating at about 17% of the rate needed for "pseudo-realtime" analysis.
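The arithmetic above can be replayed in a few lines of Python. The WUs/hour and hours/day figures are Joe's; the 75 credits per WU and the 2.5x replication are this post's guesses, not official project numbers:

```python
# Back-of-envelope RAC needed for "pseudo-realtime" multi-beam analysis.
# wu_per_hour and hours_per_day are Joe's figures; credits_per_wu (75)
# and replication (2.5) are assumptions made in this post.
wu_per_hour = 151_793
hours_per_day = 9.5
credits_per_wu = 75          # assumed average claimed credit per MB WU
replication = 2.5            # assumed average crunches per WU before retirement

needed_rac = wu_per_hour * hours_per_day * credits_per_wu * replication
current_rac = 45_648_330     # boincstats.com, 28 Aug 2008

print(f"needed RAC: {needed_rac:,.2f}")              # 270,381,281.25
print(f"coverage:   {current_rac / needed_rac:.1%}") # ~16.9%
```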

Where I could use more clarification at this point then, assuming my calculations aren't completely out in left field, is a better approximation for the average number of credits per work unit, and the average number of times a work unit is likely to be processed before being retired.

Of course this does not apply to Astropulse, which I guess is a more important figure to want to know at this point.

WinterKnight
Volunteer tester
Send message
Joined: 18 May 99
Posts: 8686
Credit: 25,033,220
RAC: 30,000
United Kingdom
Message 802777 - Posted: 28 Aug 2008, 11:30:52 UTC - in response to Message 802745.


Before Eric's credit adjustment, I believe the average cr/task was 53 cr, so if the 15% reduction target is reached that would come down to about 46 cr/task.
And going by the tasks/workunits-to-be-purged figures, it would seem only ~8% of workunits need to be replicated more than twice.
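These two figures imply the replication factor used in the later estimates in this thread. A one-line sketch, assuming each of the ~8% of failing WUs needs exactly one extra copy:

```python
# Average tasks issued per workunit: 2 initial copies, plus one extra
# for the ~8% of WUs needing re-replication (assumed to need just one more).
initial_replication = 2
reissue_rate = 0.08
replication_factor = initial_replication + reissue_rate
print(replication_factor)   # 2.08
```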

Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 802793 - Posted: 28 Aug 2008, 13:25:44 UTC - in response to Message 802777.



Ah, cool. The revised estimate is then 137,973,765.28 credits / day, which means SETI@home actually has about 33% of the computing power it needs for the standard multi-beam project.

Profile S@NL - Eesger - www.knoop.nl
Avatar
Send message
Joined: 7 Oct 01
Posts: 384
Credit: 37,202,872
RAC: 14,430
Netherlands
Message 802803 - Posted: 28 Aug 2008, 14:44:05 UTC

If the 33% is about accurate, and I take a look at the Berkeley data density page, then I'd say that over approximately the last three years there were 190 'productive days', meaning that the Arecibo Observatory was 190/(365*3) ≈ 17% "SETI productive".

If this assumption is correct, then we are already past the "pseudo-realtime" analysis barrier by a factor of about two!?
____________
The SETI@Home Gauntlet 2012 april 16 - 30| info / chat | STATS

Alinator
Volunteer tester
Send message
Joined: 19 Apr 05
Posts: 4178
Credit: 4,647,982
RAC: 0
United States
Message 802846 - Posted: 28 Aug 2008, 17:31:42 UTC
Last modified: 28 Aug 2008, 18:18:51 UTC

Hmmmm...

I'm not sure working straight from RAC is the best way to approach the question.

The telescope records 7 beams of 2.5 MHz bandwidth each, at 2 polarizations, i.e. 14 channels of data, when the opportunity arises. Each channel is then divided into 256 subbands of roughly 10 kHz.

After that, the splitter breaks the datastream into workunits of around 107 seconds in length, which then get further replicated into 2 tasks per WU for processing by the hosts.

In addition, we have to account for the fact that there is ~20% overlap in the time domain WU to WU, as well as some overlap (~10%, IIRC) from data drive to data drive.

So let's say we look at a 24 hour period of constant data collection, and forget about drive overlap for this period:

WU 'periods' per day = 86400 / (107 / .2) = 535

<edit> I seem to have a mistake here, I must recompute!

Hmmm... It looks like that should be: 86400 / (107 * .8) = 1,009.3458

Somebody want to double check that? Corrected values in parentheses.

Total Workunits per day = 535 * (7 * 2 * 256) = 1,917,440 (3,617,495.3271)

Therefore: Total Tasks Replicated per day = 3,834,880 (7,234,990.6542)

If we then take Winterknight's estimate of the failure rate, that makes it:

Total Tasks per day = 3,988,275 (7,524,390.2804)

Looking at the current # 1 Host, it seems to be running tasks in the 3000 seconds ballpark currently.

So we are talking on the order of 11,964,825,000 (22,573,170,841.2) task-seconds to run all the tasks from 24 hours of telescope time. Since this host is an 8 banger, that works out to 1,495,603,125 (2,821,646,355.15) seconds for the #1 host to do all the work collected in one day.

So divide that by 86400 to find how many #1 equivalents you need to do it all in one actual day and you get about 17310 (32,658) hosts.

Now take that number of #1 hosts and multiply by its reported RAC (assuming it is at steady state currently) and you come up with a 'required' RAC of about 203,545,354 (384,014,312.076).
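The corrected figures can be checked with a short script; the 107 s WU length, 20% overlap, ~8% re-issue rate, and the 3000 s runtime on an 8-core host are the values used above:

```python
# One 24-hour day of constant 14-channel recording, split into MB WUs.
seconds_per_day = 86_400
wu_length = 107                   # seconds of data per workunit
overlap = 0.20                    # WU-to-WU overlap in the time domain
channels = 7 * 2                  # 7 beams x 2 polarizations
subbands = 256

wu_periods = seconds_per_day / (wu_length * (1 - overlap))  # ~1009.35
wus_per_day = wu_periods * channels * subbands              # ~3,617,495
tasks_per_day = wus_per_day * 2 * 1.04    # 2 copies + ~8% third copies

task_seconds = tasks_per_day * 3_000      # ~3000 s/task on the #1 host
hosts_needed = task_seconds / 8 / seconds_per_day   # 8 cores per host
print(f"{hosts_needed:,.0f} '#1-class' hosts needed")  # ~32,658
```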

However, the host population isn't all # 1 hosts, so going to BOINCStats we find that the aggregate RAC for the project is:

Aggregate RAC per active host = 45,648,330 / 332,973 = 137.0932

So it would appear that as a group, we are nearly an order of magnitude short of the horsepower we would need to keep up, if SAH were fortunate enough to have constant access to a telescope equivalent to the gear currently at Arecibo!

Send that little tidbit to your Senators and House Representative to consider, before they start talking about giving Arecibo the financial ax, because it's old and 'obsolete', or not 'flexible' enough! :-D

Alinator

Josef W. Segur
Project donor
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4305
Credit: 1,074,011
RAC: 1,230
United States
Message 802859 - Posted: 28 Aug 2008, 18:41:09 UTC - in response to Message 802803.


The multibeam recorder was installed in late June 2006, so about 2.17 calendar years ago. Then there was the nearly 9 months of downtime for repainting, which perhaps should be subtracted. I'd call it 17 months of operation, perhaps 520 days. Based on what's been split, roughly 36.5% of days produced some data.

The MB group count divided by 520 days gives nearly 1293 groups per day on average. That's 331008 WUs per day, and reflects the MB work which has been done (and is being done now or in queue). The real question is how much data has been returned from Arecibo and not yet split. Some is kept locally and some put into storage at LBNL HPSS, we don't know even an approximate total.

The MB group count multiplied by about 6.38 approximates how many AstroPulse WUs could be made from the same data; call it 2.8 million. As a very rough guess we've done maybe 50 thousand of those. The factor for Line Feed work (all called Classic on the data density page) is about 5.96 since it had more overlap than MultiBeam; there's another potential 6.9 million AP WUs.

My feeling is that we have about the crunching power needed to keep up with incoming Arecibo data IF it were all Very High Angle Range (shorty) WUs and the project didn't go into meltdown. For the long term mix of angle ranges and assuming the same data is split for AP, I think we only have about one half to one third of the real time capability. OTOH, if Arecibo in future is only funded for looking for Near Earth Asteroids with the radar, there won't be any ALFA data to record.
Joe

Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 802910 - Posted: 28 Aug 2008, 22:38:08 UTC - in response to Message 802859.
Last modified: 28 Aug 2008, 22:39:00 UTC

"Josef W. Segur" wrote:

The multibeam recorder was installed in late June 2006, so about 2.17 calendar years ago. Then there was the nearly 9 months of downtime for repainting, which perhaps should be subtracted. I'd call it 17 months of operation, perhaps 520 days. Based on what's been split, roughly 36.5% of days produced some data.

The MB group count divided by 520 days gives nearly 1293 groups per day on average. That's 331008 WUs per day, and reflects the MB work which has been done (and is being done now or in queue). The real question is how much data has been returned from Arecibo and not yet split. Some is kept locally and some put into storage at LBNL HPSS, we don't know even an approximate total.

The MB group count multiplied by about 6.38 approximates how many AstroPulse WUs could be made from the same data; call it 2.8 million. As a very rough guess we've done maybe 50 thousand of those. The factor for Line Feed work (all called Classic on the data density page) is about 5.96 since it had more overlap than MultiBeam; there's another potential 6.9 million AP WUs.

My feeling is that we have about the crunching power needed to keep up with incoming Arecibo data IF it were all Very High Angle Range (shorty) WUs and the project didn't go into meltdown. For the long term mix of angle ranges and assuming the same data is split for AP, I think we only have about one half to one third of the real time capability. OTOH, if Arecibo in future is only funded for looking for Near Earth Asteroids with the radar, there won't be any ALFA data to record.
Joe

OK then, we have a new round of estimates, now including Astropulse. The new assumptions are that the average Astropulse work unit yields 750 credits, and that each AP credit takes 25% longer to earn than a multi-beam credit. Two figures are computed: the average credit needed per day and the peak credit needed per day:

RF = Replication Factor (how many times a work unit has to be processed before being retired)
AP AC adjustment factor = how long an AP credit takes to process vs. an MB credit

Average Credit Per Day Needed
--------------------------------------------------------------------------------
331,008 MB WUs / day
331,008 * 6.38 (AP WU factor) = 2,111,831.04 AP WUs / day

Average MB WU: 46 credits
Average AP WU: 750 credits

15,226,368 MB credits/day * replication factor 2.08 = 31,670,845.44
1,583,873,280 AP credits/day * replication factor 2.08 * AP AC adjustment factor 1.25 = 4,118,070,528

MB workload: 31,670,845.44 MB credits / day
AP workload: 4,118,070,528 credits / day

SETI@Home Average Credit Needed / day = 4,149,741,373.4
SETI@Home RAC (08-28-2008) = 45,648,330.00
Which is 1.1% of total needed

Peak Credit Per Day Needed
--------------------------------------------------------------------------------
151,793 MB WUs / hour * 9.5 hours/day = 1,442,033.5 MB WUs / day
1,442,033.5 * 6.38 = 9,200,173.73 AP WUs / day

Average MB WU: 46 credits
Average AP WU: 750 credits

66,333,541 MB credits/day * replication factor 2.08 = 137,973,765.28
6,900,130,297.5 AP credits/day * replication factor 2.08 * AP AC adjustment factor 1.25 = 17,940,338,773.5

MB workload: 137,973,765.28 credits / day
AP workload: 17,940,338,773.5 credits / day

SETI@Home Peak Credit Needed / day = 18,078,312,538.78
SETI@Home RAC (08-28-2008) = 45,648,330.00
Which is 0.25% of total needed

--------------------------------------------------------------------------------

In summary, we are running at up to 1.1% of the capacity required to process ALL of SETI@Home's workload as fast as new data is sampled. Also, Astropulse would represent over 99% of the entire workload.

"Josef W. Segur" wrote:

My feeling is that we have about the crunching power needed to keep up with incoming Arecibo data IF it were all Very High Angle Range (shorty) WUs and the project didn't go into meltdown. For the long term mix of angle ranges and assuming the same data is split for AP, I think we only have about one half to one third of the real time capability. OTOH, if Arecibo in future is only funded for looking for Near Earth Asteroids with the radar, there won't be any ALFA data to record.

The calculations don't seem to support Joe's conclusions, but I suspect that is because my estimate of the average credit per AP WU is way off. I estimated this value from the few AP WUs I saw go through my queue before I switched to optimized clients.

Josef W. Segur
Project donor
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4305
Credit: 1,074,011
RAC: 1,230
United States
Message 803193 - Posted: 29 Aug 2008, 18:36:52 UTC - in response to Message 802910.

...
Average Credit Per Day Needed
--------------------------------------------------------------------------------
331,008 MB WUs / day
331,008 * 6.38 (AP WU factor) = 2,111,831.04 AP WUs / day
...

The 6.38 multiplier is for a group of MB WUs; the data for each group of 256 MB WUs can also produce 6.38 AP WUs. So to calculate from individual MB WUs it's 6.38/256 ~= 0.025. Then 331,008 MB WUs / day gives about 8275 AP WUs / day.
Joe

Shadow
Send message
Joined: 29 Jul 08
Posts: 6
Credit: 3,836,310
RAC: 0
United States
Message 803255 - Posted: 29 Aug 2008, 23:20:14 UTC - in response to Message 803193.
Last modified: 29 Aug 2008, 23:37:21 UTC


The 6.38 multiplier is for a group of MB WUs; the data for each group of 256 MB WUs can also produce 6.38 AP WUs. So to calculate from individual MB WUs it's 6.38/256 ~= 0.025. Then 331,008 MB WUs / day gives about 8275 AP WUs / day.
Joe


Oh, wow! I redid the calculations using this information and they are now in line with Joe's estimates:

RF = Replication Factor (how many times a work unit has to be processed before being retired)
AP AC adjustment factor = how long an AP credit takes to process vs. an MB credit

Average Credit Per Day Needed
--------------------------------------------------------------------------------
331,008 MB WUs / day
(331,008/256) * 6.38 (AP WU factor) = 8249 AP WUs / day

Average MB WU: 46 credits
Average AP WU: 750 credits

15,226,368 MB credits/day * replication factor 2.08 = 31,670,845.44
6,186,750 AP credits/day * replication factor 2.08 * AP AC adjustment factor 1.25 = 16,085,550

MB workload: 31,670,845.44 MB credits / day
AP workload: 16,085,550 credits / day

SETI@Home Average Credit Needed / day = 47,756,395.44
SETI@Home RAC (08-28-2008) = 45,648,330.00
Which is 95.6% of total needed

Peak Credit Per Day Needed
--------------------------------------------------------------------------------
151,793 MB WUs / hour * 9.5 hours/day = 1,442,033.5 MB WUs / day
(1,442,033.5/256) * 6.38 = 35,938 AP WUs / day

Average MB WU: 46 credits
Average AP WU: 750 credits

66,333,541 MB credits/day * replication factor 2.08 = 137,973,765.28
26,953,500 AP credits/day * replication factor 2.08 * AP AC adjustment factor 1.25 = 70,079,100

MB workload: 137,973,765.28 credits / day
AP workload: 70,079,100 credits / day

SETI@Home Peak Credit Needed / day = 208,052,865.28
SETI@Home RAC (08-28-2008) = 45,648,330.00
Which is 22% of total needed

--------------------------------------------------------------------------------

In summary, we are running at 22% of the capacity required to process ALL of SETI@Home's workload as fast as new data is normally sampled. Of this workload, Astropulse represents about 34%. However, SETI@Home currently has 95.6% of the computing power it needs over a sustained period of time.
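For reference, the corrected estimate as a script. The credit-per-WU values, the 2.08 replication factor, and the 1.25 AP slowdown are the assumptions stated above; the per-WU AP factor is Joe's:

```python
# Credits/day needed to keep pace with the telescope, MB plus Astropulse.
MB_CREDITS = 46            # assumed average credits per MB WU
AP_CREDITS = 750           # assumed average credits per AP WU
REPLICATION = 2.08         # average tasks issued per WU
AP_SLOWDOWN = 1.25         # AP credits take ~25% longer than MB credits
AP_PER_MB_WU = 6.38 / 256  # Joe's per-group factor, per individual MB WU
CURRENT_RAC = 45_648_330   # boincstats.com, 28 Aug 2008

def credits_needed(mb_wus_per_day):
    mb = mb_wus_per_day * MB_CREDITS * REPLICATION
    ap = mb_wus_per_day * AP_PER_MB_WU * AP_CREDITS * REPLICATION * AP_SLOWDOWN
    return mb + ap

for label, mb_wus in [("average", 331_008), ("peak", 151_793 * 9.5)]:
    need = credits_needed(mb_wus)
    print(f"{label}: {need:,.0f} credits/day, "
          f"{CURRENT_RAC / need:.1%} covered")
```

This reproduces the ~95.6% and ~22% figures above; the tiny differences come from not rounding the AP WU count to a whole number.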

Ingleside
Volunteer developer
Send message
Joined: 4 Feb 03
Posts: 1546
Credit: 4,333,026
RAC: 1,085
Norway
Message 803301 - Posted: 30 Aug 2008, 1:54:11 UTC - in response to Message 803193.

The 6.38 multiplier is for a group of MB WUs; the data for each group of 256 MB WUs can also produce 6.38 AP WUs. So to calculate from individual MB WUs it's 6.38/256 ~= 0.025. Then 331,008 MB WUs / day gives about 8275 AP WUs / day.

Ah, this gives a markedly different result...

But there's still something that doesn't seem to add up with the numbers...

If we use on average 2.1 results/WU, this means:
331,008 SETI WUs/day gives 695,117 tasks/day. At 365,000 bytes/task, this means... 23.5 Mbit/s of download usage.
8,275 Astropulse WUs/day gives 17,378 tasks/day. At 8,000,000 bytes/task, this means... 12.8 Mbit/s of download usage.

But if you look at the Cricket graph, outgoing bandwidth was at around 20 Mbit/s in May 2007 and steadily increased to around 45 Mbit/s in July, before the release of Astropulse. Now, this is by no means exact, since application downloads and some other traffic also show up on the graph. Still, the usage in July is roughly 2x what would be needed to run 700k tasks/day...

If you look at Scarecrow's graphs, September 2007 (the first month with a full 'wait purge') shows on average 323,396 WUs in the purge queue, and 702,442 results. If there's a 24-hour wait before purging, these numbers are a good indication of how many WUs/day and results/day there are. It also shows on average 8.449 results/second generated, which works out to 729,993 results/day.

For later months, the lowest was November 2007, with 351k WUs and 779k results in the purge queue, and 763k results/day generated.

For June 2008, it increased to 553,840 WUs, 1,164,483 results, 1,191,560 generated/day, and, based on returned results/hour, 1,187,016 results/day returned. These numbers also indicate an average download rate of 39 Mbit/s.

July 2008 shows a large spike, with 767,670 WUs, 1.6 M results, 1.25 M created and 1.25 M returned. Since results purged is much higher than generated and returned, it may indicate a batch of "bad" WUs or something, so I wouldn't put too much weight on those numbers.

Looking at the first half of 2008, the averages are 515k WUs, 1.1 M results, 1.1 M created, 1.1 M returned, and 37 Mbit/s. Also worth mentioning: on average 2.14 results/WU. While I have no idea how accurate the graphs really are, the numbers seem consistent with each other, and also fairly close to the bandwidth usage indicated on the Cricket graph...

So, if 331k WUs/day recorded is accurate, at least to me it looks like SETI@home has been crunching roughly 50% more WUs/day than the recording capacity...

How the addition of Astropulse will influence things is less certain, since there's no telling how many 'short' SETI WUs there are on average...
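The bandwidth sanity check above can be written as a tiny script; the task sizes and the 2.1 results/WU ratio are the figures from this post:

```python
# Average outbound line rate implied by a given daily task volume.
def mbit_per_sec(wus_per_day, results_per_wu, bytes_per_task):
    bits_per_day = wus_per_day * results_per_wu * bytes_per_task * 8
    return bits_per_day / 86_400 / 1e6       # bits/day -> Mbit/s

mb_rate = mbit_per_sec(331_008, 2.1, 365_000)     # ~23.5 Mbit/s
ap_rate = mbit_per_sec(8_275, 2.1, 8_000_000)     # ~12.9 Mbit/s
print(f"MB {mb_rate:.1f} + AP {ap_rate:.1f} Mbit/s")
```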

____________
"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."


Copyright © 2014 University of California