Let's Play CreditNew (Credit & RAC support thread)

Message boards : Number crunching : Let's Play CreditNew (Credit & RAC support thread)


Profile iwazaru
Volunteer tester
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936937 - Posted: 24 May 2018, 14:31:34 UTC - in response to Message 1936896.  
Last modified: 24 May 2018, 14:44:33 UTC

It would be interesting to see what the RAC-derived FLOPS of a CPU-only system, running stock, 24/7 for at least 2 months, comes out as compared to its benchmark FLOPS rating.


It is very, very good. It is also one of the main reasons I created this thread. And you don't even have to wait a couple of months: just pick a task or two that aren't obvious outliers and it's easy to do the math for "per day" and "per month". If you want, you can take a look at mine and factor in 6 tasks at a time... Or, if you're patient, you can wait for me to do it (here, I mean, because I already have and know the results); I was going to anyway. Also, our processors are nearly identical in performance (though mine is likely pulling only 20-25W), so any speed-up your 2600k shows compared to my 7700HQ will be 90% down to Lunatics.
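For anyone who wants to try that arithmetic themselves, here's a minimal C sketch, assuming the usual Cobblestone definition (a sustained 1 GFLOPS earns 200 credits per day); the RAC and benchmark figures below are made up:

```c
/* Rough arithmetic only: turn a host's RAC into a sustained-GFLOPS estimate
 * via the Cobblestone definition (1 GFLOPS sustained = 200 credits/day),
 * then compare it with the host's Whetstone benchmark figure. */
#include <stdio.h>

int main(void)
{
    double rac = 1500.0;              /* hypothetical RAC, credits/day */
    double benchmark_gflops = 28.0;   /* hypothetical whole-host Whetstone GFLOPS */

    double rac_gflops = rac / 200.0;  /* credits/day -> sustained GFLOPS */
    double efficiency = rac_gflops / benchmark_gflops * 100.0;

    printf("RAC-derived: %.2f GFLOPS (%.1f%% of the %.1f GFLOPS benchmark)\n",
           rac_gflops, efficiency, benchmark_gflops);
    return 0;
}
```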

Your GPU calculations, however, could be way, way off*... How many tasks are you running?

*Edit: Ah sorry, I just saw you've got more than one GPU.

- - - - - - -
@Raistmer... now when you say "obviously", obvious to who exactly? :D
Don't know if you remember anyone ever telling you "I don't understand black & white windows" but that was me :)

But seriously, thank you so much for finding the time to reply. I tried to find you an Eric quote before going to bed last night (and half asleep) but came up empty... I did find one where he stated the obvious, i.e. if, for example, optimized apps are twice as fast as stock and stock gets those optimizations, then... the optimized apps won't be twice as fast as stock anymore :) Which isn't really any help...

I'll do some more digging and also try and find some time for a "real" reply to your post.
Thanx again.
ID: 1936937 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22182
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1936970 - Posted: 24 May 2018, 18:35:40 UTC
Last modified: 24 May 2018, 18:40:43 UTC

As the calibration part of the system runs continuously, reacts continuously, and depends not only on your host but also on your wingmen's hosts, it is perfectly possible to get a very good result one day on a couple of tasks but a really bad correlation a few days later - hence the advice to run for a couple of months to get a wide range of comparators. You also need to log and plot every single task run, not just the handful that you happen to see on a given day.
I've been collecting data on one of my crunchers for about a month now, and the variation is sitting at about +/- 35% from expected, which is "pretty poor". The cruncher is ONLY running stock apps, has NO GPU, and is a four-core Intel i3, but I'm only using two of the cores to try and keep temperatures reasonable.
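If you want to do the same exercise, here's a rough sketch of the bookkeeping in C; the credit figures are invented, just to show the calculation:

```c
/* Sketch of the comparison described above: log the credit awarded per task,
 * then see how far each award strays from the mean. Numbers are made up. */
#include <stdio.h>

int main(void)
{
    double credit[] = { 92.0, 61.0, 118.0, 75.0, 103.0 };  /* hypothetical awards */
    int n = sizeof credit / sizeof credit[0];

    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += credit[i];
    double mean = sum / n;

    for (int i = 0; i < n; i++) {
        double dev = (credit[i] - mean) / mean * 100.0;
        printf("task %d: %6.1f credits (%+6.1f%% from mean %.1f)\n",
               i, credit[i], dev, mean);
    }
    return 0;
}
```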

[edit]
I've just noticed that your only computer has an Intel GPU - using this can have quite an impact on the performance of the CPU due to the way the two share quite a number of internal resources, so if you are serious about the sort of analysis you are looking to do, turn it off (for anything to do with BOINC and SETI).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1936970 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1936974 - Posted: 24 May 2018, 18:45:57 UTC - in response to Message 1936970.  

On the current showing, you're getting exclusively SETI@home v8 v8.08 (alt) - which I think is an optimised wolf in sheep's clothing. We probably ought to look for a v8.00 for Windows/x86 to compare it with - I suspect that's what CN would choose as 'stock' for normalisation purposes.
ID: 1936974 · Report as offensive
Profile iwazaru
Volunteer tester
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936988 - Posted: 24 May 2018, 21:40:21 UTC - in response to Message 1936859.  

I would say it's for quite small children.

She's a couple of months away from 2yrs old and if PhotoBucket hadn't gone crazy I'd be spamming the café with pics :)
She's just about old enough to point at the TV and shout "Masha!"



Thanx, I'll check it out :)

- - - - - -
Regarding CreditScrew and the discouragement of optimization:

Consider a stock app and an optimized one. The opt app provides better performance (mostly due to memory access patterns that can't be measured in FLOPs, btw). So opt hosts do the same work faster and earn more credits.
But at some point stock implements the same optimizations... and recalibration occurs. The FLOPs are the same. The stock app processes the same work on the same hardware faster... so stock is recalibrated to get the same credits as before.
What happens to the opt hosts now?...


I'm not sure I understand exactly what you are saying. I think you are saying:
"Since the FLOPs are the same, tasks get the same amount of credit as they did when the stock app was slower".
Isn't that exactly what we want?
ID: 1936988 · Report as offensive
Profile iwazaru
Volunteer tester
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936994 - Posted: 24 May 2018, 22:32:06 UTC - in response to Message 1936970.  

I've just noticed that your only computer has an Intel GPU - using this can have quite an impact on the performance of the CPU due to the way...


Yeah that sure was a disappointment! I tried all sorts of stuff for the first month... GPU only, CPU only (2 cores only, 8 cores only), iGPU only... and the poor iGPU performed remarkably well considering it's likely drawing a lot less than 10W... But once I tried using everything together - and remember this is a laptop so the throttling is even more aggressive - I was forced to kick the iGPU out of the mix. What a shame it works that way... I'm now glad I didn't get one of the new Ryzen laptops with a 2500U.

But yeah, I'll be creating a thread for this laptop when I'm ready for anon apps and that's when I'll need all the help I can get from you guys :)

...hence the advice to run for a couple of months to get a wide range of comparators. You also need to log and plot every single task run, not just the handful that you happen to see on a given day....


Not really. I was crunching stock V6 under CreditNew 24/7 for maybe a year and credit was pretty much rock solid*. So there is zero chance CN makes credit swing 35% by "default". Something has obviously changed since then. One of the things that has changed is the lumping of Arecibo & Green Bank together, as we already mentioned, but even without looking I'm going to say 35% is just way too high. Either it's a lot less or some other really nasty change has been made in the past couple of years.

*... and then one day my CUDA times doubled. Just like that. I never really used the Boards up until then, but I thought half my ION had burned out or something and came in begging for help. And thank God (just an expression) that Richard happened to see my post, because I'm 99.9% sure he was the only person in all of Setiland capable of guessing the problem. Some stupid nVidia dll or something that couldn't be changed... so I was "forced" to use Lunatics. And got used to frequenting the Boards instead of just reading Matt's Technical Updates (way back when he used to post).
And now you guys are stuck with me :P
ID: 1936994 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1936995 - Posted: 24 May 2018, 22:37:24 UTC - in response to Message 1936988.  
Last modified: 24 May 2018, 22:41:12 UTC

I'm not sure I understand exactly what you are saying. I think you are saying:
"Since the FLOPs are the same, tasks get the same amount of credit as they did when the stock app was slower".
Isn't that exactly what we want?

No.
What we want is Credit given that matches the definition of a Cobblestone.
Replacing an un-optimised application with an optimised one for stock shouldn't affect the amount of Credit given for a given WU, but it does.
Grant
Darwin NT
ID: 1936995 · Report as offensive
Profile iwazaru
Volunteer tester
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936999 - Posted: 24 May 2018, 23:04:41 UTC - in response to Message 1936995.  

...but it does.


I'm pretty sure raistmer just said it doesn't.

(But not 100% sure that's what he meant, which is why I asked)
ID: 1936999 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1937022 - Posted: 25 May 2018, 5:20:03 UTC - in response to Message 1936999.  
Last modified: 25 May 2018, 5:41:48 UTC

I'm pretty sure raistmer just said it doesn't.

The way I read it he said it did.

But at some point stock implements the same optimizations... and recalibration occurs. The FLOPs are the same. The stock app processes the same work on the same hardware faster... so stock is recalibrated to get the same credits as before.

Doing the same work faster, the host will claim more Credit. The amount of Credit for a given WU should be the same as before. But since the whole mechanism doesn't work as it should, the end result is that we end up getting less Credit than before. Over time, that has resulted in us getting much less Credit than we should be getting if we were paid according to the definition of a Cobblestone.

Ref - see my post comparing what I've been getting to what I should get if my system were only 5% efficient (I expect it's actually a bit better than that).
Grant
Darwin NT
ID: 1937022 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22182
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1937025 - Posted: 25 May 2018, 6:42:54 UTC

One issue, already identified in this thread, is that while stock (8.00) and stock (8.08) implement exactly the same algorithm, they do so in somewhat different manners. As far as I can glean, 8.00 does not use all the SIMD bits of the CPU which 8.08 does; thus 8.00 actually does more "FLOPs" to complete a given task and so, by the definition of Cobblestones, should get more credit. (To put it simply - SMID is a set of chip-level optimised instructions which can be turned on or off within an application, and different CPU types have different sub-sets of the whole, expanding, set.)

Moving on to the optimised applications, it gets a little more confusing. If one is running Windows, there is a very handy installer which allows you to choose exactly what level of SMID you run. On a given chip, assuming for now that it actually has all the different SMID levels built in and is running a given set of tasks, there will be a fair difference in FLOPs per task between the slowest (most FLOPs) and the fastest (least FLOPs) of the SMID sets.

And that's only one type of optimisation for a given algorithm.

Raistmer gave an example a short time ago of another type - code changes that implement the same algorithm in a more effective manner. Here the programmer alters the code to do the same job in fewer steps (it is faster to increment two variables in a single step than in two separate steps). Such changes can be highly compiler-dependent, as some compilers do a lot of this sort of thing for us.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1937025 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937028 - Posted: 25 May 2018, 7:49:24 UTC - in response to Message 1937025.  
Last modified: 25 May 2018, 8:06:21 UTC

One issue, already identified in this thread, is that while stock (8.00) and stock (8.08) implement exactly the same algorithm, they do so in somewhat different manners. As far as I can glean, 8.00 does not use all the SIMD bits of the CPU which 8.08 does; thus 8.00 actually does more "FLOPs" to complete a given task and so, by the definition of Cobblestones, should get more credit. (To put it simply - SMID is a set of chip-level optimised instructions which can be turned on or off within an application, and different CPU types have different sub-sets of the whole, expanding, set.)
That opens a whole new can of worms.

SIMD = 'Single Instruction, Multiple Data'.

Say we're multiplying a bunch of numbers together. An old-fashioned CPU, or a modern CPU running old-fashioned code, will run four separate multiply instructions to multiply four separate pairs of numbers. The modern code (running by necessity on a modern CPU) will issue a single instruction, but give it all four pairs of numbers at the same time - same result, but in fewer clock cycles and hence quicker.
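To make the four-pairs example concrete, here's a minimal C sketch of the same multiplications done both ways (real SSE intrinsics; compile with SSE enabled, e.g. gcc -msse):

```c
/* The scalar loop issues four separate multiply instructions; the SSE version
 * issues one _mm_mul_ps that multiplies all four pairs at once. Either way
 * the same four multiplications get done. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
    float scalar[4], simd[4];

    /* "Old-fashioned" code: four multiply instructions. */
    for (int i = 0; i < 4; i++)
        scalar[i] = a[i] * b[i];

    /* SIMD code: one packed-multiply instruction, four pairs of operands. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(simd, _mm_mul_ps(va, vb));

    for (int i = 0; i < 4; i++)
        printf("%g * %g = %g (scalar) / %g (SSE)\n", a[i], b[i], scalar[i], simd[i]);
    return 0;
}
```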

So, what's our definition of a FLOP? Is it the number of CPU instructions executed (fewer), or the number of pairs of numbers multiplied together (exactly the same)? I'll need to go and find some proper references, but my gut feeling is that it should be the second.

Edit - still searching, but everyone so far has skirted round my question. First question arising - is a double-precision flop worth the same as a single-precision flop, in cobblestone terms?
ID: 1937028 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937030 - Posted: 25 May 2018, 7:53:51 UTC - in response to Message 1936988.  
Last modified: 25 May 2018, 8:25:16 UTC


I'm not sure I understand exactly what you are saying. I think you are saying:
"Since the FLOPs are the same, tasks get the same amount of credit as they did when the stock app was slower".
Isn't that exactly what we want?


Well, the same work (analysing N seconds of radio signal) done faster, with time and effort put into optimization... just to get the same credits.
Moreover, if stock gets the same credits, all opt hosts will now get lower credits - not because they became slower, but just because stock became faster.
And then we attempt to compare SETI credits, where optimization has been going on since the project's founding, with the credits of other projects where computations are done in virtual machines (for example) and computation speed isn't a priority at all.

So, if the aim is to analyse the radio signal faster - yes, of course stock optimization is exactly what we want. But credits will not reflect the advance in achieving that goal.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937030 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1937031 - Posted: 25 May 2018, 8:06:03 UTC - in response to Message 1937030.  

So, if the aim is to analyse the radio signal faster - yes, of course stock optimization is exactly what we want. But credits will not reflect the advance in achieving that goal.

Not as things are with Credit New.

With Credit actually based on the definition of a Cobblestone, and Credit awarded for the FLOPS required for a given WU (i.e. the FLOPS that would have to be performed if there were no optimisations), optimisation of the stock application would result in more Credit per hour.
Even with the stock application making use of optimisations, the credit for any given WU would still be the same, but the improved runtimes would result in more WUs being processed, and so more Credit being earned.
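A rough sketch of that arithmetic, using the Cobblestone definition (200 credits per GFLOPS-day); the per-WU FLOPs figure and the runtimes are made-up numbers:

```c
/* If credit per WU is fixed by the FLOPs the WU *requires* (Cobblestone
 * definition: a sustained 1 GFLOPS earns 200 credits/day), a faster app
 * earns more per day only by finishing more WUs. Numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    double wu_flops = 3.0e13;   /* hypothetical FLOPs required by one WU */
    double credit_per_wu = wu_flops / 1.0e9 / 86400.0 * 200.0;   /* ~69.4 */

    double stock_hours = 4.0, optimised_hours = 2.0;   /* hypothetical runtimes */
    double stock_rac     = 24.0 / stock_hours     * credit_per_wu;
    double optimised_rac = 24.0 / optimised_hours * credit_per_wu;

    printf("credit per WU : %.1f (the same for both apps)\n", credit_per_wu);
    printf("stock RAC     : %.1f credits/day\n", stock_rac);
    printf("optimised RAC : %.1f credits/day\n", optimised_rac);
    return 0;
}
```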
Grant
Darwin NT
ID: 1937031 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937032 - Posted: 25 May 2018, 8:16:44 UTC - in response to Message 1937028.  
Last modified: 25 May 2018, 8:20:43 UTC


So, what's our definition of a FLOP? Is it the number of CPU instructions executed (fewer), or the number of pairs of numbers multiplied together (exactly the same)? I'll need to go and find some proper references, but my gut feeling is that it should be the second.


IMHO, second.
So going SIMD will not change the FLOPs count.

Also, it will not necessarily improve speed (!). That depends on the implementation of the particular instruction on the particular device.
And there have been at least two examples already where going to the next SIMD level actually slowed things down (!):
SSE3 on Venice, and AVX on the first generation of AMD CPUs that supported it.
It may sound weird - why implement SIMD at all then, one could say - but there are understandable reasons.
The competitor (Intel) extended the instruction set (for computation speed improvement, of course). AMD was not able to implement those instructions as effectively as Intel did. But if they had simply left the implementations out, all software that uses those instructions would have failed to run on AMD chips. So they implemented the corresponding SIMD levels as fast as they could at that moment, for compatibility reasons. Their single SSE3 horizontal addition took, let's say (for example, not a real number), 8 CPU ticks, while Intel did the same operation in, say, 6 ticks.
Eventually the microcode implementation improved and SSE3 HADD did indeed become faster than 4 scalar additions.

What does all this mean for the FLOPs/credit issue? We can't just apply a fixed coefficient to account for SIMD usage versus scalar operations (if we want to not just count FLOPs but estimate the WORK done by a device in a given time), because SIMD implementations differ wildly in speed.
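For anyone wondering what a "horizontal addition" actually is, here's a minimal C sketch (real SSE3 intrinsics, compile with e.g. gcc -msse3; the tick counts above were only illustrative):

```c
/* Horizontal addition: sum the four lanes of one SSE register. The scalar
 * version is three dependent adds; the SSE3 version is two HADDPS
 * instructions. Whether the SIMD route is faster depends entirely on how a
 * given CPU implements HADDPS. */
#include <stdio.h>
#include <pmmintrin.h>   /* SSE3: _mm_hadd_ps */

int main(void)
{
    float v[4] = { 1.0f, 2.0f, 3.0f, 4.0f };

    /* Scalar horizontal sum. */
    float scalar_sum = v[0] + v[1] + v[2] + v[3];

    /* SSE3 horizontal sum: two hadd steps collapse four lanes to one. */
    __m128 x = _mm_loadu_ps(v);
    x = _mm_hadd_ps(x, x);   /* {1+2, 3+4, 1+2, 3+4} */
    x = _mm_hadd_ps(x, x);   /* {10, 10, 10, 10}     */
    float simd_sum = _mm_cvtss_f32(x);

    printf("scalar: %g, SSE3: %g\n", scalar_sum, simd_sum);
    return 0;
}
```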
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937032 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937033 - Posted: 25 May 2018, 8:30:13 UTC - in response to Message 1937030.  
Last modified: 25 May 2018, 8:58:53 UTC


Well, the same work (analysing N seconds of radio signal) done faster, with time and effort put into optimization... just to get the same credits.

Actually, I think I was not precise here, because I suspect that stock will have the same RAC (!) as before. And RAC is credits per unit time (the speed of earning credits), not an amount of credits.
If CreditScrew indeed does this (and it seems it does), then with a stock improvement we will observe a REDUCTION in the credits paid for the same single task (and because stock now operates faster it does more tasks per day, so RAC remains the same - that's the renormalization I spoke about). Is that what we really wanted as the pay-off for optimization? I would say no.

Actually, this is the fundamental difference between CreditScrew and FLOPs counting.
Neither method can account for the work actually done.
But in the situation I describe they will act differently. FLOPs counting will pay the same credit per task; tasks per day increase, so the stock host will have a bigger RAC (as one would expect from the word "optimization"). Opt hosts will get the same as before (not affected), again as common sense says they should.

CreditScrew instead will renormalize RAC, completely hiding the effects of stock optimization (optimizer pay-off: ZERO) and reducing RAC for anonymous platform hosts (though they haven't changed at all - and that's a negative pay-off for optimization).

So, from the optimizer's point of view FLOPs counting is much better (though it is inadequate too).
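A toy comparison of the two behaviours just described; all the numbers are invented:

```c
/* Contrast sketch: under plain FLOPs counting, credit per task stays pinned
 * to the work in the task, so a 2x faster stock app doubles stock RAC and
 * anonymous-platform hosts are untouched. Under a scheme that renormalises
 * stock back to its old RAC, credit per task halves, which also halves RAC
 * for the (unchanged) optimised hosts. Numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    double credit_per_task = 70.0;                    /* before stock speeds up */
    double stock_per_day = 6.0, opt_per_day = 12.0;   /* tasks/day */
    double speedup = 2.0;                             /* stock becomes 2x faster */

    /* FLOPs counting: credit per task unchanged. */
    printf("FLOPs counting: stock RAC %.0f -> %.0f, opt RAC %.0f -> %.0f\n",
           stock_per_day * credit_per_task,
           stock_per_day * speedup * credit_per_task,
           opt_per_day * credit_per_task,
           opt_per_day * credit_per_task);

    /* Renormalisation: stock RAC held constant, so credit per task drops. */
    double new_credit = credit_per_task / speedup;
    printf("Renormalised  : stock RAC %.0f -> %.0f, opt RAC %.0f -> %.0f\n",
           stock_per_day * credit_per_task,
           stock_per_day * speedup * new_credit,
           opt_per_day * credit_per_task,
           opt_per_day * new_credit);
    return 0;
}
```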
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937033 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22182
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1937034 - Posted: 25 May 2018, 8:34:13 UTC

Edit - still searching, but everyone so far has skirted round my question. First question arising - is a double-precision flop worth the same as a single-precision flop, in cobblestone terms?

That's really a difficult question to answer in general - it all depends on the hardware on which it is being executed. If the processor has a Double Precision Maths Unit (DPMU) for every core, then the answer will be different to one with no DPMU, and different again to one where a DPMU is shared between the "logic" parts of the CPU. And that's ignoring how well the compiler does its job.

This part of the debate shows up a weakness in the Cobblestone concept rather nicely - "Not all FLOPs are created equal". Read any of the more recent references on the Whetstone (the basis of FLOPs and thus Cobblestones) and you will quickly come across phrases that can be reworded as "This metric only applies to an idealised, theoretical processor, but may be used as a guide for others. Buyer beware."

When I worked on designing the control systems for massive chemical plants, we could not rely on Whetstones to be even a "good guide" to the actual performance of a given processor on a given task; we had to do proper benchmarking of the processor/task combination. I do recall plodding through the manual for an early DSP to work out exactly how many clock cycles a given loop was going to take - we were that tight on time in the overall process loop. For example, was it faster to shift by y and subtract x to do a multiply by z, or just do the multiply by z? (As you can imagine, the answer depended on what the values of x, y and z were - and they were all long integers.)


SMID - woops, I knew I got it wrong, and I was talking about SIMD as Richard pointed out (you should have seen some of the other incorrect acronyms I came up with...)
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1937034 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937035 - Posted: 25 May 2018, 8:49:51 UTC - in response to Message 1937034.  
Last modified: 25 May 2018, 8:51:52 UTC

we had to do proper benchmarking of the processor/task combination.

And that's the only robust method of speed comparison I know of.
One should always compare quantities of the same dimension. It's the first thing one learns in physics (and not only in physics).

For the SETI applications, that means that if one wants to compare processing speed (that is, work, not FLOPs, done per unit of time) one should separate AP by particular blanking level and MB by particular AR.
Then the comparison has a chance of being correct.

Of course this nullifies the idea of inter-project comparison, but one can't have everything, and for free ;)
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937035 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1937037 - Posted: 25 May 2018, 8:52:52 UTC

So, what's our definition of a FLOP? Is it the number of CPU instructions executed (fewer), or the number of pairs of numbers multiplied together (exactly the same)? I'll need to go and find some proper references, but my gut feeling is that it should be the second.

I'd say number 2.

Reading the original "How SETI@home works", it talks about the number of calculations required to process a WU, and why WUs with different ARs require different numbers of calculations.
How a machine does them doesn't matter, nor does it matter if it only does 3/4 of them, or 1/4 of them - as long as the result is correct.
The fact is, each WU would require that number of operations if it were processed manually, and it would require that number of operations when processed on the Cobblestone-definition reference system.

What matters is that a particular WU would require a certain number of operations to be performed if there were no optimisations, short cuts or other operation minimisation used.
That maximum possible number of operations should be what is used in determining the work done. The reference machine in the Cobblestone definition would perform all of those operations in order to process that WU, and so that number of required FLOPs would give the amount of Credit due for that WU.
Grant
Darwin NT
ID: 1937037 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 1937039 - Posted: 25 May 2018, 8:56:19 UTC - in response to Message 1937035.  
Last modified: 25 May 2018, 8:59:50 UTC

(that is, work, not FLOPs, done per unit of time)

What would you use to determine work done, other than FLOPs? After all, FLOPS is the metric used to measure the arithmetic capability of (and work done by) a computer.


EDIT-
From the wiki on FLOPS
Floating-point operations are typically used in fields such as scientific computational research. The unit MIPS measures integer performance of a computer. Examples of integer operation include data movement (A to B) or value testing (If A = B, then C). MIPS as a performance benchmark is adequate when a computer is used in database queries, word processing, spreadsheets, or to run multiple virtual operating systems.[3][4] Frank H. McMahon, of the Lawrence Livermore National Laboratory, invented the terms FLOPS and MFLOPS (megaFLOPS) so that he could compare the supercomputers of the day by the number of floating-point calculations they performed per second. This was much better than using the prevalent MIPS to compare computers as this statistic usually had little bearing on the arithmetic capability of the machine.

Grant
Darwin NT
ID: 1937039 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937041 - Posted: 25 May 2018, 9:03:13 UTC

Found some food for thought.

Whetstone Benchmark History and Results
Linpack Benchmark Results On PCs

Two from the same collection, by the same author. There are others...
ID: 1937041 · Report as offensive
rob smith (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22182
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1937042 - Posted: 25 May 2018, 9:11:24 UTC

For the SETI applications, that means that if one wants to compare processing speed (that is, work, not FLOPs, done per unit of time) one should separate AP by particular blanking level and MB by particular AR.
Then the comparison has a chance of being correct.

I think we are getting to the same thing from different starting places.

I would suggest:
Baseline a set of APs and MBs on a defined processor (even a "theoretical" one), decide how much each is worth per % blanked or per degree of AR, and scale each task to get its "value". Ignore the differences in processor and application (apart from those needed in the validation process), as those are user choices, not project decisions.
This way there is time-consistency, in that a user will know that an x% blanked AP will have a value of y, and an M-degree MB will have a value of n, independent of the processor or application.
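Something like this toy sketch in C - the scaling functions and constants are entirely made up, just to show the shape of the idea:

```c
/* Toy version of the scheme above: a task's value is fixed by its parameters
 * alone (AP percent blanked, MB angle range), independent of the host or app
 * that crunches it. All functions and constants here are hypothetical. */
#include <stdio.h>

/* Hypothetical: AstroPulse value falls as more of the task is blanked. */
static double ap_value(double percent_blanked)
{
    return 700.0 * (1.0 - 0.5 * percent_blanked / 100.0);
}

/* Hypothetical: Multibeam value scaled from a mid-range AR reference task. */
static double mb_value(double angle_range)
{
    const double reference_ar = 0.42, reference_value = 80.0;
    return reference_value * reference_ar / angle_range;
}

int main(void)
{
    printf("AP, 20%% blanked   : %.1f credits\n", ap_value(20.0));
    printf("MB, AR 0.01 (VLAR): %.1f credits\n", mb_value(0.01));
    printf("MB, AR 2.5        : %.1f credits\n", mb_value(2.5));
    return 0;
}
```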

Think about it - today, even running the same task but with a different pair of validating crunchers will probably get you a different value, and that's what gets most people's backs up.

As to the argument about cross-project comparability in value per task - that doesn't exist today, and may never exist, and that is something I think we have to live with. The truth is that many projects do not use the fully adaptive scoring system that SETI does; the majority appear to use either a simple fixed scheme or a more complex time-based scoring system. I wouldn't be surprised if one or two actually use a variation on the one I've outlined above.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1937042 · Report as offensive