The Server Issues / Outages Thread - Panic Mode On! (118)


Stephen "Heretic" - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2026511 - Posted: 5 Jan 2020, 22:13:02 UTC - in response to Message 2026509.  
Last modified: 5 Jan 2020, 22:13:40 UTC



{Edit} - OK, the phantom of the fora strikes again: I wrote about the stalled downloads (which had reached into the hundreds), and now they are flowing as they should (more or less).


Thanks for posting about the stalled downloads.....My stalled ones finally finished :)


. . Glad to be of service ... 8^}

Stephen

. .
ID: 2026511
juan BFP - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2026512 - Posted: 5 Jan 2020, 22:16:24 UTC
Last modified: 5 Jan 2020, 22:33:45 UTC

Data Distribution State          SETI@home v7   Astropulse   SETI@home v8   As of*
Results ready to send            0              0            98             0m
Current result creation rate **  0/sec          0.0938/sec   7.0673/sec     5m


Do we have a problem?

<edit> Forget it - the current creation rate has risen to 77/sec.
ID: 2026512
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2026515 - Posted: 5 Jan 2020, 22:57:54 UTC - in response to Message 2026500.  

Yes. SETI had a problem today, but it's now resolved. It'll take a little while for the resulting mess - congestion - to clear up, but it'll settle down in a day or two.

When a task completes, two things happen. First, a file containing the result data is uploaded. That was going on normally all day, with one file uploading for each task as soon as it was complete - you probably didn't even notice them.

Secondly, BOINC 'reports' the task - does the housekeeping to say how long it took, etc. These reports can be stacked up and transferred in bulk - in fact, it's more efficient to do it that way. The report can't be done unless the upload has already taken place. So if they've gone, you know that everything is complete.
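The upload-then-report ordering described above can be sketched as follows. This is a minimal illustration of the dependency, not actual BOINC client internals; all names here are hypothetical.

```python
# Sketch of the two-step completion flow: a result file uploads as soon
# as the task finishes, while reports are batched and can only include
# tasks whose upload has already completed.

class Task:
    def __init__(self, name):
        self.name = name
        self.uploaded = False   # step 1: result file sent
        self.reported = False   # step 2: housekeeping sent to the scheduler

def upload(task):
    # Step 1 happens per-task, immediately on completion.
    task.uploaded = True

def report_in_bulk(tasks):
    # Step 2 is batched for efficiency; a task is only eligible
    # once its upload has finished.
    batch = [t for t in tasks if t.uploaded and not t.reported]
    for t in batch:
        t.reported = True
    return [t.name for t in batch]

tasks = [Task("wu_a"), Task("wu_b"), Task("wu_c")]
upload(tasks[0])
upload(tasks[1])                  # wu_c's upload is still pending
print(report_in_bulk(tasks))      # only wu_a and wu_b get reported
```

So, as the post says: once the reports have gone, you know both steps are complete.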
ID: 2026515
betreger Project Donor
Joined: 29 Jun 99
Posts: 11358
Credit: 29,581,041
RAC: 66
United States
Message 2026538 - Posted: 6 Jan 2020, 1:28:23 UTC

Results ready to send 0 0 1,595
This is not good
ID: 2026538
juan BFP - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2026545 - Posted: 6 Jan 2020, 1:56:58 UTC - in response to Message 2026538.  
Last modified: 6 Jan 2020, 1:58:43 UTC

Results ready to send 0 0 1,595
This is not good

Current result creation rate ** 0.0613/sec 60.2323/sec

To keep the caches filled we need something > 40/sec, so >60/sec is fine; the splitters are just working hard to fill the new cache limits of 1000s of hungry hosts.

Unless something else changes, all will return to normal in a few hours.
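The arithmetic behind that "few hours" estimate can be sketched like this; the cache-shortfall figure is an assumed number for illustration, not a server statistic.

```python
# Back-of-envelope refill estimate: creation rate above the steady-state
# consumption rate gives a surplus that fills the outstanding shortfall.

consumption_rate = 40.0     # tasks/sec needed to keep caches filled (from the post)
creation_rate = 60.2323     # observed splitter output, tasks/sec
surplus = creation_rate - consumption_rate

cache_shortfall = 500_000   # assumed tasks still missing from host caches
hours_to_refill = cache_shortfall / surplus / 3600
print(f"~{hours_to_refill:.1f} hours to refill at a {surplus:.1f}/sec surplus")
```

With a surplus of roughly 20 tasks/sec, even a half-million-task shortfall clears in under a day, which is consistent with the "few hours" guess.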
ID: 2026545
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 2026562 - Posted: 6 Jan 2020, 4:41:27 UTC - in response to Message 2026545.  
Last modified: 6 Jan 2020, 4:46:43 UTC

Results ready to send 0 0 1,595
This is not good

Current result creation rate ** 0.0613/sec 60.2323/sec

To keep the caches filled we need something > 40/sec. So >60/sec is fine, the splitters are just working hard to fill the new cache limits of 1000`s of hungry hosts..
The problem is that it needs to sustain that output, and in reality, it can't. You might get bursts over 50, but the average output is often a lot less.
Ready to send is now down to 1,139.

Edit- big jump to 5,290.
And big jumps are usually the result of no work going out. Hopefully it was just a case of a big gap between updates.
Grant
Darwin NT
ID: 2026562
Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2026567 - Posted: 6 Jan 2020, 4:58:35 UTC

I tend to look at the "out in the field" number. Lately it has been 7.3 million. It lets us know how big the hole to fill is. After the outage I saw it at 6.8 million... and now it is 7,208,377, so the caches are starting to fill up again. The RTS should start to build up again once the "hole" is filled and the servers have done some assimilating, deleting and purging.
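The "hole" arithmetic above, worked through with the figures quoted in the post:

```python
# Size of the post-outage cache "hole" and how much of it remains,
# using the numbers given in the post.

nominal_in_field = 7_300_000   # typical "out in the field" count lately
after_outage = 6_800_000       # observed right after the outage
current = 7_208_377            # latest observed value

hole = nominal_in_field - after_outage
remaining = nominal_in_field - current
print(f"outage hole: {hole:,} tasks; still to refill: {remaining:,}")
```

Once `remaining` reaches zero, surplus splitter output goes into the ready-to-send buffer instead of host caches, which is why RTS builds back up only after the hole is filled.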
ID: 2026567
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 2026572 - Posted: 6 Jan 2020, 9:02:58 UTC

Instant timeout on some downloads again.
Grant
Darwin NT
ID: 2026572
gs
Volunteer tester
Joined: 18 May 99
Posts: 45
Credit: 5,412,660
RAC: 8
Germany
Message 2026573 - Posted: 6 Jan 2020, 9:37:36 UTC - in response to Message 2024182.  

Old computer broke down. Trying to get this set up on a new one. Wondering why nothing happened. Now I know. Thank you.


Same here. Waiting to receive new WUs.
Happy New Year to everybody!
ID: 2026573
Stephen "Heretic" - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2026579 - Posted: 6 Jan 2020, 11:07:33 UTC - in response to Message 2026573.  

Old computer broke down. trying to get this set up on a new one. Wondering why nothing happened. Now I know. Thank you.


Same here. Waiting to receive new WUs.
Happy New Year to everybody!


. . May the bluebird of happiness ... etc ... etc ... etc

Stephen

:)
ID: 2026579
taslehoff

Joined: 28 Sep 02
Posts: 3
Credit: 2,938,934
RAC: 0
United Kingdom
Message 2026594 - Posted: 6 Jan 2020, 17:13:00 UTC - in response to Message 2026515.  

Thanks for the reply mate (you learn something new every day) :)
ID: 2026594
Sleepy
Volunteer tester
Joined: 21 May 99
Posts: 219
Credit: 98,947,784
RAC: 28,360
Italy
Message 2026716 - Posted: 7 Jan 2020, 23:36:05 UTC

And we are back! :-)
ID: 2026716
Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 2026717 - Posted: 7 Jan 2020, 23:48:26 UTC

So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :)
ID: 2026717
Stephen "Heretic" - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2026718 - Posted: 7 Jan 2020, 23:53:08 UTC - in response to Message 2026717.  

So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :)


. . If it does we'll just blame you :)

Stephen

:)
ID: 2026718
Keith Myers - Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2026724 - Posted: 8 Jan 2020, 0:29:36 UTC - in response to Message 2026717.  

So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :)

Don't know what your magic recipe is . . . I never have any kind of luck.
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | Scheduler request failed: HTTP internal server error
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | [sched_op] Deferring communication for 02:27:31
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | [sched_op] Reason: Scheduler request failed

Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2026724
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 2026725 - Posted: 8 Jan 2020, 0:32:42 UTC

Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | Project has no tasks available
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | Project requested delay of 303 seconds
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.190_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.19.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.18.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.1.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.189.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.218.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.204.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.221.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.196.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.16.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc55_2bit_guppi_58692_63351_HIP23319_0037.5834.818.22.45.36.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc22_2bit_guppi_58691_73716_HIP40209_0069.27070.818.21.44.173.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.200.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.228.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_59155_HIP22762_0024.29719.409.21.44.87.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_59155_HIP22762_0024.29719.409.21.44.41.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.214.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30393.409.21.44.205.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.216.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30393.409.21.44.196.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_59155_HIP22762_0024.29719.409.21.44.65.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30427.0.22.45.205_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.205.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.197.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_56565_HIP21556_0016.30476.0.21.44.194.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.20.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.213.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57231_HIP21594_0018.27735.818.22.45.121.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57231_HIP21594_0018.27735.818.22.45.119.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.198.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30393.409.21.44.225.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_59155_HIP22762_0024.29719.409.21.44.82.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc22_2bit_guppi_58691_74037_HIP40671_0070.28596.409.21.44.172.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58693_04093_HIP98677_0128.16158.818.22.45.205.vlar_3
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc15_2bit_guppi_58691_74697_HIP40118_0072.11167.818.22.45.28.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.212.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30427.0.22.45.199_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30427.0.22.45.188_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57231_HIP21594_0018.27735.818.22.45.115.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.202.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.199_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc22_2bit_guppi_58691_73716_HIP40209_0069.27070.818.21.44.171.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.195_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30427.0.22.45.206_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30454.818.22.45.222.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.221.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc22_2bit_guppi_58691_74037_HIP40671_0070.28596.409.21.44.203.vlar_2
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.181_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.207_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57547_HIP21488_0019.30417.409.22.45.206.vlar_0
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] Deferring communication for 00:05:03
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] Reason: requested by project


It's giving me all these "ack"s and I don't have a single Seti@Home task running right now. What are these again?

Tom
A proud member of the OFA (Old Farts Association).
ID: 2026725
Keith Myers - Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2026726 - Posted: 8 Jan 2020, 0:34:29 UTC

You were able to report your finished work.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2026726
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 2026727 - Posted: 8 Jan 2020, 0:35:29 UTC - in response to Message 2026726.  

You were able to report your finished work.


Ah, slaps forehead......
A proud member of the OFA (Old Farts Association).
ID: 2026727
Stephen "Heretic" - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2026729 - Posted: 8 Jan 2020, 0:50:31 UTC - in response to Message 2026724.  

So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :)

Don't know what your magic recipe is . . . I never have any kind of luck.
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | Scheduler request failed: HTTP internal server error
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | [sched_op] Deferring communication for 02:27:31
Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | [sched_op] Reason: Scheduler request failed


. . 3 machines get constant "No tasks available" while the 4th machine cannot report work. No errors, just no response at all; the requests go off into limbo ...

. . Time for a break ...

Stephen

<shrug> and all after a 10 hour outage ... :(
ID: 2026729
Keith Myers - Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2026732 - Posted: 8 Jan 2020, 0:58:31 UTC

My assumption is that the stock machines, with just a hundred or so tasks to report, hog the connections right after the project returns, and force out the hosts that need to report thousands of tasks and must make dozens of connections at the 256-task maximum per report. I never can make a connection for several hours after the project has returned.
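A rough illustration of that reporting load, assuming the 256-task cap per scheduler request mentioned above: a stock host clears its backlog in one connection, while a big host has to win the connection race a dozen times over.

```python
# Number of scheduler connections needed to report a backlog, given a
# fixed per-request cap (256 tasks, as mentioned in the post).
import math

MAX_TASKS_PER_REPORT = 256

def connections_needed(tasks_to_report):
    # Each scheduler request can report at most MAX_TASKS_PER_REPORT tasks.
    return math.ceil(tasks_to_report / MAX_TASKS_PER_REPORT)

print(connections_needed(100))    # stock machine: 1 connection
print(connections_needed(3000))   # big host: 12 connections
```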
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2026732

©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.