Posts by Stubbles

1) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1827650)
Posted 31 Oct 2016 by Profile Stubbles
Post:
Haven't processed anywhere near 100 on CPU. Mine are processing just like a normal BLC VLAR on the CPU, mostly. Maybe 30 minutes faster, so instead of 2 hours I've seen a lot process in 90 minutes or so. They process about the same as a GUPPI on the GPU, around 8 minutes or so. Two up on the GPUs.

I just find that a sample of fewer than 50 tasks on a device (CPU or GPU) is usually not enough; from experience, whenever I reported averages from a sample of 30-50 tasks, they were often not very close to the long-term average.
Maybe a sample of 100 tasks is more than necessary, but 50 is definitely not enough for a new set of tasks.

If anyone is running BoincTasks, you should be able to easily import your History.csv into a spreadsheet.
Then, it's just a matter of using the Filter to select only the MESSIER031s on CPU or GPU.
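(And if anyone prefers a script to a spreadsheet, here's a minimal sketch in Python of the same filter-and-average idea. The column names "Name", "Application" and "Elapsed time", and the assumption that elapsed time is exported in seconds, are guesses on my part; check the header row of your own History.csv and adjust to match.)

import csv
from collections import defaultdict

def average_runtimes(path, pattern="MESSIER031"):
    # Sum elapsed times per application (CPU app vs GPU app) for matching tasks.
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if pattern not in row.get("Name", ""):
                continue                                      # not a MESSIER031 task
            app = row.get("Application", "unknown")
            try:
                elapsed = float(row.get("Elapsed time", ""))  # assumed to be seconds
            except ValueError:
                continue                                      # skip unparsable rows
            sums[app] += elapsed
            counts[app] += 1
    return {app: (counts[app], sums[app] / counts[app]) for app in sums}

for app, (n, avg) in average_runtimes("History.csv").items():
    print(f"{app}: {n} tasks, average {avg / 60:.1f} min")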
2) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1827649)
Posted 31 Oct 2016 by Profile Stubbles
Post:
They process about the same as a GUPPI on the GPU, around 8 minutes or so. Two up on the GPUs.

I run 1 GPU WU at a time, and the current non-VLAR Guppies are processing a couple of minutes faster (no less than 2, sometimes almost 4) than the VLAR ones do.

Hey Grant,
Glad to see you've become an optimiser :-D
...even though you're still reporting guppis on GPU :-p
R ;-)
3) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1827643)
Posted 31 Oct 2016 by Profile Stubbles
Post:
Hey Shaggie!

I SOooo love to see the updates to GPU outputs. Great work as always!

Have you given any thought to putting that talent of yours towards promoting the importance of optimization with Lunatics v0.45 and MrK's prog?
...since running stock is very inefficient (especially because of the CPU stock app)!

My GTX1060 and my 2 GTX750Ti cards seem to have 30%-45% better throughput than they would with stock...and that's without mentioning the almost 100% improvement on CPU tasks with the Lunatics CPU app on my CPUs (Xeon W3550).

The way I see it: by improving my throughput, I'm improving the project's overall throughput
...and if more SETIzens could see that in some charts, it would be visual data worth acting upon with the hardware they already have.

I think your current charts are still incredibly good at showing the better buys, since electricity consumption is likely the greatest cost for dedicated crunchers. But optimization seems so important to me that I think it's worthwhile to mention it to you again.

Just me submitting my wish list...again...for the greater good (aka throughput)! ;-}

Cheers,
RobG :-D
4) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1827624)
Posted 31 Oct 2016 by Profile Stubbles
Post:
Hey fellow Optimizers!
Who's doing what with the Andromeda Galaxy tasks (aka Messier031)?
From the few I've processed on my 2 rigs (cuz I still keep a cache of close to 1000 tasks for my tests and script devs), they seem to be even better on the CPU than regular Guppis (blc...guppi...vlars), and just as bad on my nVidia GPUs (2*750Ti & 1 GTX1060).

On my 2 Xeon W3550s, the MESSIERs are all running under 2.5hrs while the regular guppis can sometimes take up to 3hrs. I haven't processed enough for good solid stats yet though.
Has anyone processed at least 100 on CPU cores to confirm my preliminary observation?

Cheers,
RobG :-)
5) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817068)
Posted 14 Sep 2016 by Profile Stubbles
Post:
Thanks Jeff for all the great info there.

As far as not hearing about it, well, I did bring it up in your original rescheduling thread as being something that anyone working on a rescheduling application should take into consideration.
WoW your memory is great!
I was fairly green back then (and I still am!) and barely understood that it was not a big issue for a prototype, since it just meant that a reassignment of tasks would have appeared never to have occurred.
It definitely needs to be added to Jim's QOpt before v1.0 to cover all scenarios.

Anything else that Jimbocous and his Beta testers should know or consider?

Rob :-)
6) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817050)
Posted 14 Sep 2016 by Profile Stubbles
Post:
As I mentioned in my previous post:
1. the instance where GuppiR sometimes reports something like:
~ 0 tasks moved to GPU and 0 tasks moved to CPU ~
when there is definitely something to move...and in my experience, sometimes hundreds of tasks.

I have come across that GuppiR "0 moved... and 0 ...moved" message many times.
Up until now, I was using MrK's GuppiR in a way that I considered "out-of-scope", since:
with my semi-automated pre-MrK script, I can stash more tasks than the S@h server normally allows, without breaking the Boinc client's 1,000-task limit per project.
This gives me a <4-day stash with 1,000 tasks, and it lets me measure tweaks to my setups more quickly (since the tasks processed during the next 24hrs are older and therefore get validated more quickly).

So back to my "out-of-scope" bug.
I sometimes get the "nothing moved" message even when I have hundreds of non-VLARs stashed in the CPU queue that should be moved to a GPU app queue.
Is it a real bug? Not until a CPU queue can normally exceed 100 tasks.
I'm just mentioning it because I read Keith's post:
... then at least put into the help file that the message can be expected as normal if there are minimal or no tasks to move.

and the output message is the same ...even if I suspect a different cause.

I will put off any hardcore testing of my "out-of-scope" bug until we hear back from MrK, but I will start logging whenever I come across the scenario again.
Cheers,
RobG
7) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817043)
Posted 14 Sep 2016 by Profile Stubbles
Post:
If the information message can't be removed entirely, then at least put into the help file that the message can be expected as normal if there are minimal or no tasks to move. As of now, it unnecessarily causes alarm to everyone who starts using the apps until they have more experience with them, and who have likely already posted to the forum asking why they are seeing the message, only to receive the previous response from the authors et al.


I think there are 2 issues (maybe bugs) here:
1. the instance where GuppiR sometimes reports something like:
~ 0 tasks moved to GPU and 0 tasks moved to CPU ~
when there is definitely something to move...and in my experience, sometimes hundreds of tasks.
(I don't have the exact quote captured anywhere, but I will report my "not-in-scope" experience in a separate post after this one);

and

2. yoda51's "Error: could not determine CPU version_num from client_state. Nothing changed." From my experience, that is a bug, and in this case it might be indirectly caused by yoda51's bloated app_info.xml.

I wrote "might be indirectly" since MrK's GuppiR...exe works well in a seperate directory with just 2 files:
- client_state.xml and
- sched_request_setiathome.berkeley.edu.xml
(It could look for other files in the default Boinc directory that I would be unaware of from my little test).

I find it very interesting that yoda51 HAD a bloated app_info.xml
and that replacing it seems to have fixed the issue.
I suspect a cause other than MrK's GuppiR for that issue, since GuppiR doesn't modify that file.

@yoda51:
if you still have a copy of the bloated app_info.xml
then someone might be interested in looking at it.
(not me though since I'm still learning a lot about: client_state.xml )

R :-)
8) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817038)
Posted 14 Sep 2016 by Profile Stubbles
Post:

client_state_prev.xml seems to be a live backup.
It gets overwritten with client_state.xml as soon as a change to
client_state.xml has successfully been saved.
That way, if the computer crashes while client_state.xml is in the process of being saved, during the next reboot Boinc will likely restart with client_state_prev.xml if client_state.xml is corrupt.

In 99% of scenarios, it seems to me that client_state.xml and
client_state_prev.xml will always be identical if not for that fraction of a second when Boinc overwrites client_state_prev.xml with client_state.xml

That's wrong information. The client_state.xml file never gets overwritten and always contains more current data than client_state_prev.xml. In close to 0% of scenarios those two will be identical. (Try running a file comparison utility against the two files.)

Each time the file is updated it is written out as client_state_next.xml. Once that file is successfully written and closed, the existing client_state_prev.xml file is deleted, client_state.xml is renamed as client_state_prev.xml and, finally, client_state_next.xml is renamed as client_state.xml.

Thanks for the clarification Jeff.
I'm surprised I never saw client_state_next.xml or heard of it.

(I don't see a reason for having 2 backups for a fraction of a second, but oh well, there's probably a good reason for this.
It might be a way to keep the client_state.xml file from getting highly fragmented on HDDs.
[edit] Or it's probably because it is much faster to simply rename a file than to overwrite it with fresh content from RAM, in effect shortening the window when something could go wrong. It would also ensure that if there is a client_state.xml, then it can be expected to be complete.)[/e]
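(To make the rename dance concrete for myself, here's a minimal sketch in Python of the sequence Jeff describes. It's obviously not BOINC's actual code, and the flush-to-disk step is my own assumption about why the rotation is safe.)

import os

def save_state(state_xml, directory="."):
    # The three file names involved in the rotation Jeff describes.
    cur  = os.path.join(directory, "client_state.xml")
    prev = os.path.join(directory, "client_state_prev.xml")
    nxt  = os.path.join(directory, "client_state_next.xml")

    # 1. Write the fresh state to client_state_next.xml and flush it to disk,
    #    so a complete client_state.xml exists on disk at every moment.
    with open(nxt, "w", encoding="utf-8") as f:
        f.write(state_xml)
        f.flush()
        os.fsync(f.fileno())

    # 2. Drop the old backup, 3. demote the current file to the backup name,
    # 4. promote the freshly written file to client_state.xml.
    if os.path.exists(prev):
        os.remove(prev)
    if os.path.exists(cur):
        os.rename(cur, prev)
    os.rename(nxt, cur)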

If I understand the full picture correctly:
After Boinc has restarted, the client_state_prev.xml instance will get replaced fairly quickly.
(Do you know what the triggers are for the creation of a new client_state_next.xml ?)

The first time client_state_prev.xml gets replaced, it will in effect be a "better backup", since it will have the client_state.xml content from when Boinc wasn't running.
But as soon as there is a second replacement, there is no backup left from before the time when Boinc was restarted.
If that's the case, then client_state_prev.xml should not be relied on as a backup of client_state.xml.

Am I missing something again?
9) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817032)
Posted 14 Sep 2016 by Profile Stubbles
Post:
But I assume that at some point Mr. Kevvy will address this ...

If he doesn't get the chance before you want to release your QOpt v1.0 front-end, you should probably just include the known issues/bugs in MrK's v0.51 in a text file, since this thread is getting way too long for new users to be expected to find such info.

Just my 2cents,
RobG :-)
10) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817018)
Posted 14 Sep 2016 by Profile Stubbles
Post:
If not, you might try copying client_state_prev.xml, which is another backup, to client_state.xml.

... where client_state_prev.xml is written by the boinc client, presumably as some type of backup also.

client_state_prev.xml seems to be a live backup.
It gets overwritten with client_state.xml as soon as a change to
client_state.xml has successfully been saved.
That way, if the computer crashes while client_state.xml is in the process of being saved, during the next reboot Boinc will likely restart with client_state_prev.xml if client_state.xml is corrupt.

In 99% of scenarios, it seems to me that client_state.xml and
client_state_prev.xml will always be identical if not for that fraction of a second when Boinc overwrites client_state_prev.xml with client_state.xml

Richard might have better knowledge of the internal workings of Boinc to share, since I am just going from what I'm observing.

Cheers,
RobG
11) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1817015)
Posted 14 Sep 2016 by Profile Stubbles
Post:
S@h AutoTaskSwap v0.2_beta1 is copying that file several times (copy /Y d:\ProgramData\BOINC\client_state.xml .\client_state-backup.xml>nul)
maybe a corruption occurs?
Hello yoda51,
My frontend script doesn't modify any Boinc files. Only MrK's app does.
Mine only does a backup of client_state.xml in case something goes wrong.
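(For the curious, that backup step is nothing fancy. Here's a minimal sketch in Python of the idea, with a timestamp added so repeated runs don't overwrite earlier copies; the actual script just uses the one-line copy shown in yoda51's quote, and the d:\ path is whatever your own BOINC data directory is.)

import os
import shutil
import time

def backup_client_state(boinc_dir=r"d:\ProgramData\BOINC", dest_dir="."):
    # Copy client_state.xml to a timestamped file before a rescheduler touches it.
    src = os.path.join(boinc_dir, "client_state.xml")
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(dest_dir, "client_state-backup-%s.xml" % stamp)
    shutil.copy2(src, dest)  # copy2 also preserves the file's timestamps
    return dest

# usage: call backup_client_state() right before running the rescheduler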

i think i will restart the seti project / reinstall boinc after crunching all workunits in place....
If you mean:
1. Stop DLing tasks with the Project Command button: "No new tasks";
2. process all the tasks already DLed and let them Upload and Report back; and then
3. press the Project Command button: "Reset project";
then that is a good option, and I don't understand why others are concerned.

Au plaisir,
Robert
12) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1816738)
Posted 12 Sep 2016 by Profile Stubbles
Post:
HELLO to all readers of this thread!
I would like to know if people are using these apps to optimise their rig productivity and how satisfied they are with the results.
Can I ask you all to reply if you are using them and note if you are using both or just the one.
Stephen

The last rescheduler 0.51 from Rob and the latest Qopt 0.49 from Jim, who is doing a fine job releasing a string of optimized updates, it seems daily.

Since I made the v0.2 frontend to Mr Kevvy's GUPPI Rescheduler for Windows,
I'm very pleased to see my productivity increase by ~15% on my two rigs.
I'm surprised that there isn't more interest.
Everyone who has reported back has seen an increase of at least 10%, and some close to 20%.
I don't even think Lunatics can claim such an increase (but it's been so long since I've used stock that I may be wrong).
If it wasn't for Mr Kevvy's app, I would have needed to figure out another way to transfer tasks, and it wouldn't have been as simple, efficient and effective.
So thanks Mr Kevvy for the great app and thanks to Jim for taking over my frontend for Windows 7 to 10.

Maybe we should start a new thread after Jim releases his v1.0 (as a totally revamped and much improved version of my v0.2).

If there is more interest, I might put much more work into my proof-of-concept, currently called S@h-MicroMgr.
It would bundle any S@h scripts, apps and progs (that the creators allow me to bundle) in one Zip file to be extracted directly to the desktop.
The intent is to make it easy for those who don't feel comfortable with the Windows Command Prompt (formerly known as MS-DOS).
If anyone is interested in taking a look at my proof-of-concept, aka S@h-MicroMgr_v0.1alpha1,
please send me a PM and I'll provide you with a Dropbox link.

Cheers,
RobG
13) Message boards : Number crunching : With two GTX 750 Ti, How do I do: Cuda50 4/gpu (1or2 GPU) and NV_SoG 2/gpu (max 1 GPU) (Message 1816039)
Posted 10 Sep 2016 by Profile Stubbles
Post:
Hello "Boinc client file" gurus
After reading one of Richard's post,
it got me wondowering about doing a similar scenario on my rig with two GTX 750 Ti.

I already have a semi-automated script to move tasks from CPU queue to any GPU app queue.
I tried editing app_info.xml to add parts of the Cuda50 setup that was in:
C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\oldApp_backup\
but that failed, and I ghosted many tasks and screwed up my Lunatics NV_SoG setup.

So I reinstalled Lunatics with Cuda50, and right now I'm only trying to modify app_config.xml in a similar way to another of Richard's posts.
Unfortunately, the Boinc Wiki doesn't even mention some of the fields that Richard used: http://boinc.berkeley.edu/trac/wiki/ClientAppConfig

( FYI, with Lunatics v0.45 Beta4, I can move tasks with my script to any of the GPU <plan_class>. ex: <plan_class>cuda50</plan_class>
and it will only use the GPU app selected during the Lunatics setup no matter what the GPU <plan_class> is. )

Unfortunately, I'm not getting the desired GPU processing that I want:
- Cuda50 4/gpu (1or2 GPU), and
- NV_SoG 2/gpu (0or1 GPU)

I get the following:
- 8cuda/2gpu (Good!)
- 2SoG/2gpu (Bad: I want 2SoG/1gpu, and 4cuda/1gpu)
- 4cuda/1gpu, and 1SoG/1gpu (1st part is right but the second should be 2SoG/1gpu)

Here's my: app_config.xml
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.25</gpu_usage>
			<cpu_usage>0.25</cpu_usage>
		</gpu_versions>
	</app>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>opencl_nvidia_SoG</plan_class>
		<avg_ncpus>1.0</avg_ncpus>
		<ngpus>1.0</ngpus>
		<gpu_versions>
			<gpu_usage>0.50</gpu_usage>
			<cpu_usage>0.50</cpu_usage>
		</gpu_versions>
	</app_version>
</app_config>

note: I've removed the "astropulse_v7" section in order to make it shorter.

[edit]I'm guessing it has to do with:
<avg_ncpus>1.0</avg_ncpus> 
<ngpus>1.0</ngpus>
[/e]
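To sanity-check my guess, here's a little Python sketch of how I read the arithmetic (my reading of the client docs, so treat it as an assumption): <gpu_usage> or <ngpus> is the fraction of a GPU each task of that version reserves, so tasks-per-GPU is just its reciprocal.

def tasks_per_gpu(gpu_fraction_per_task):
    # e.g. 0.25 of a GPU per task -> 4 tasks fit on one GPU
    return int(1 / gpu_fraction_per_task)

print("default gpu_usage 0.25 ->", tasks_per_gpu(0.25), "tasks/gpu (what cuda50 gets)")
print("SoG ngpus 1.0          ->", tasks_per_gpu(1.0), "task/gpu  (what I'm seeing)")
print("SoG ngpus 0.5          ->", tasks_per_gpu(0.5), "tasks/gpu (what I want)")

If that reading is right, the <ngpus>1.0</ngpus> line is what keeps SoG at 1 task per GPU (and 0.5 would give 2 per GPU), while pinning SoG to only one of the two cards looks like something app_config.xml can't express on its own.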

Once I get the desired behaviour, I'll ask afterwards how to modify app_info.xml in order to use the following two GPU app .exe files:
- cuda50: Lunatics_x41zi_win32_cuda50.exe
- NV_SoG: MB8_win_x86_SSE3_OpenCL_NV_SoG_r3500.exe

Any help is appreciated.
Cheers,
RobG :-)
14) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1815693)
Posted 8 Sep 2016 by Profile Stubbles
Post:

Hi Songbird,
. . May I ask what you are running?? Stock? Lunatics? And CUDA or SoG? There are tweaks to release the CPU from the monopolistic grip of SoG if you want to.
. . For the GTX970, if you are running SoG: in the command line file in your BOINC data directory "Projects/Seti@Home" called "mb_cmdline_win_x86_SSE3_OpenCL_NV.txt" (there may be _SoG after the NV if you are running r3500) [you can edit it with a text editor like Notepad] add "-high_prec_timer -use_sleep" and you will then find your CPUs are no longer wholly monopolised.
Stephen

I'm running lunatics 0.45 beta for SoG.(MB8_win_x86_SSE3_OpenCL_NV_SoG_r3500.exe). I did the command line thing. CPU dropped significantly. It's better but still nothing like the CUDA apps from before.
Thanks!

Hey Songbird,
Stephen will be able to help you with the specifics of the GTX970.
On my GTX750Ti: 4/gpu with cuda50 app is slightly better than 2/gpu with NV_SoG app (especially since there is no lag with cuda50).
On my GTX1060: 2/gpu with NV_SoG app is MUCH better than 4/gpu with cuda50 app.

My guess is: your GTX970 will be better off with 2 tasks/gpu with NV_SoG ...if there is no lag and you need to use your rig at the same time.

Keep in mind that all these differences are with running MrKevvy's app to optimize your CPU & GPU queues.
see: https://setiathome.berkeley.edu/forum_thread.php?id=79954

Cheers,
RobG
15) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1811914)
Posted 23 Aug 2016 by Profile Stubbles
Post:
I just reread my post and I forgot to consider these:

1. Is the 100/mobo limit programmed as a separate variable/constant from the 100/gpu limit?

2. If the code is on the server side, is that a huge issue?
I'm guessing it could be part of the Boinc architecture code, or specific code for the S@h implementation.

3. Should I move this type of thread to the Beta forum?

Cheers,
RobG

[edit]I wrote this post without having done a refresh on the thread...so I hadn't seen Richard's great post.
I should have mentioned that, at first, I wasn't concerned about differentiating the # of cores with or w/o HT enabled.
The cpu link in the OP already seems to know how many cores a rig has. Some Xeons even have double (sometimes triple) entries for rigs with HT off and on.

I hope someone remembers the detail of the staff post that I can't find :-/

This post is an offshoot of a post from earlier this month that ...hmmm...isn't my proudest moment.
I didn't plan on referring or linking to it...but I think it will be beneficial for the wider context.
You'll probably get the gist from reading my last post in the thread[/e]
16) Message boards : Number crunching : Philosophy: To CPU or NOT to CPU (Message 1811868)
Posted 23 Aug 2016 by Profile Stubbles
Post:
Hey folks,

For those who are pro-CPU, I started a new thread in order to remove an unresolved but related tangent from the thread: Panic Mode On (103) Server Problems?

Also, in the spirit of this thread's title, I entitled it:
Philosophy: To DeviceQueueOptimize or NOT (with a focus on: is it "micro managing"?)


"Keep Calm and carry Crunch On"
R ;-)
17) Message boards : Number crunching : GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU (Message 1811865)
Posted 23 Aug 2016 by Profile Stubbles
Post:
Hey A!

In Mr Kevvy's thread, please see his last post from 2 days ago:
https://setiathome.berkeley.edu/forum_thread.php?id=79954&postid=1810926

He writes at the end:
Happily my response time is now much faster.

so you might want to post a link to this thread in his thread (since he probably is "Subscribed" to his own thread and could get an auto-email sent when there is a new post...if he set his forum preferences that way).

Also, you could try sending him a private message (PM) in case he doesn't get an immediate email notification
(or even worse: he might not even be "subscribed" to his own thread ...cuz it's a forum bug, since you don't get automatically subscribed to your own thread!)

Hope that helps a bit,
RobG
18) Message boards : Number crunching : Philosophy: To DeviceQueueOptimize or NOT (with a focus on: is it "micro managing"?) (Message 1811859)
Posted 23 Aug 2016 by Profile Stubbles
Post:
Here is Grant's post in the last link provided in this thread's OP, with my reply below it.
I welcome your reply and hope this time we can come to a conclusion on "micro managing", especially if I'm still not understanding your perspective well.

There won't be any conclusion because you don't consider what you are doing to be micro management, when it would qualify as a text book example.
Just because you automate something doesn't disqualify it from being micro management.

From the Wikipedia,
Micromanagement
In business management, micromanagement is a management style whereby a manager closely observes or controls the work of subordinates or employees.

BOINC allows you to set your desired levels of CPU usage, network activity, resource share etc & then it just gets on with the job.
You find it necessary to shuffle work around, on top of what you have already set the manager to do.
ie a manager (yourself) closely observes or controls the work of subordinates or employees (BOINC).
As I said, a text book example.

I think I'm starting to see why we have a different perspective.
I'll try to use similar Business Management terminology.
Keep in mind that mine is mostly from experience in Canada's Federal Govt hierarchy with the following generic titles from bottom going up:

1. non-supervisor (the worker-bee)
2. supervisor
3. manager
4. director
5. DG (Director General)
6. ADM (Assistant Deputy Minister)
7+ ...irrelevant for the moment

For me:
6. the ADM is the BOINC committee;
5. the DG is the Boinc architecture;
4. the Director is the S@h project;
3. the Managers are the SETIzens;
2. the Supervisors are the Boinc Managers (or BoincTasks or SETIspirit); and
1. the worker-bee is the Boinc host (PC) and the Boinc Client

As for an automated DeviceQueueOptimizer (such as MrKevvy's guppiRescheduler),
I place that at the #2 level alongside the Boinc Manager (or BoincTasks).
If it weren't fully automated, or if it were prone to bugs, I would include it with #3 and I could consider that a form of "micro managing".
But if it doesn't require any intervention during a whole week/month, I don't see how it is a form of "micro managing".

Does that help or change anything on your end?

With all due and intended respect,
RobG
19) Message boards : Number crunching : Panic Mode On (103) Server Problems? (Message 1811850)
Posted 23 Aug 2016 by Profile Stubbles
Post:
Grant & all:

I just started a new thread:

Philosophy: To DeviceQueueOptimize or NOT (with a focus on: is it "micro managing"?)

https://setiathome.berkeley.edu/forum_thread.php?id=80168

I welcome your reply and hope this time we can come to a conclusion on "micro managing", especially if I'm still not understanding your perspective well.

There won't be any conclusion because you don't consider what you are doing to be micro management, when it would qualify as a text book example.
Just because you automate something doesn't disqualify it from being micro management.


From the Wikipedia,
Micromanagement
In business management, micromanagement is a management style whereby a manager closely observes or controls the work of subordinates or employees.

BOINC allows you to set your desired levels of CPU usage, network activity, resource share etc & then it just gets on with the job.
You find it necessary to shuffle work around, on top of what you have already set the manager to do.
ie a manager (yourself) closely observes or controls the work of subordinates or employees (BOINC).
As I said, a text book example.
20) Message boards : Number crunching : Philosophy: To DeviceQueueOptimize or NOT (with a focus on: is it "micro managing"?) (Message 1811849)
Posted 23 Aug 2016 by Profile Stubbles
Post:
In order to remove an unresolved tangent from the thread: Panic Mode On (103) Server Problems?
and in the spirit of Stephen's new thread "Philosophy: To CPU or NOT to CPU",
this thread is to focus on the benefits of DeviceQueueOptimisation (such as Mr Kevvy's guppiRescheduler) and whether it should be considered a form of "micro managing".

Please see the following posts (in another thread) for the debate's background:

Grant's post mentioning "micro managing":
https://setiathome.berkeley.edu/forum_thread.php?id=79575&postid=1811758

My 1st reply: https://setiathome.berkeley.edu/forum_thread.php?id=79575&postid=1811819

Grant's reply: https://setiathome.berkeley.edu/forum_thread.php?id=79575&postid=1811825

I'll take the time to write a concise reply (as I have a tendency to write way too much at times).
Until then, I invite anyone to comment.

Cheers,
RobG :-)

