The Server Issues / Outages Thread - Panic Mode On! (119)

Message boards : Number crunching : The Server Issues / Outages Thread - Panic Mode On! (119)
Grumpy Swede
Volunteer tester
Joined: 1 Nov 08
Posts: 8170
Credit: 49,849,242
RAC: 147
Sweden
Message 2044569 - Posted: 13 Apr 2020, 15:42:39 UTC - in response to Message 2044568.  
Last modified: 13 Apr 2020, 15:45:26 UTC

watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.
i can say with certainty, the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.


. . Puts hand up ...

. . I'll take a wild guess, Copernicus?

Stephen

? ?

Ah well, I could have said something about cheating, and Obsessive Crunching Disorder, but I won't, this time...... :-)

But I will say to the spoofers: Spare us the hacker lingo, please....
ID: 2044569
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044568 - Posted: 13 Apr 2020, 15:38:53 UTC - in response to Message 2044525.  

watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.
i can say with certainty, the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.


. . Puts hand up ...

. . I'll take a wild guess, Copernicus?

Stephen

? ?
ID: 2044568
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044567 - Posted: 13 Apr 2020, 15:37:59 UTC - in response to Message 2044556.  

Hi Juan,
I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then, as I said, it is not fair to those of us who do not cheat.
[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]
[edit2] The last time I looked at his PC list that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]
Have a great day! :)
Siran


. . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen

:(

Hi Stephen,

Yep, Juan explained what he meant and I agree with him. :)

Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044567
juan BFP Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044564 - Posted: 13 Apr 2020, 15:36:35 UTC - in response to Message 2044559.  
Last modified: 13 Apr 2020, 15:46:37 UTC

So, then 64 is also a tag, and all the extra WU's for a spoofed "64" client are just a conspiracy, and the thousands of "extra" WU's on such a host do not exist??

Please don't put words in my mouth. I just say that in my case the 1999 is a simple tag, not related to the number of WUs/GPUs or the way the host works, nothing more or less. I just changed mine to 2020 for you to see. If you go to E@H and Rosetta you will see the tag there is still 1999.
ID: 2044564
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14114
Credit: 200,643,578
RAC: 1,983
United Kingdom
Message 2044563 - Posted: 13 Apr 2020, 15:36:17 UTC - in response to Message 2044546.  

Now it is crunching the WU with a deadline of May 18 04:11:11 AM EST.
So it will report them more than a month in advance of the deadline!
So what is the problem with that?
I'm still waiting for someone to show me where this "hidden rule" that says a host can't have more than xxxx WUs is written, so I could read it.
One point of view is that the rules changed on 2 Mar 2020, when we were all given 4 weeks' notice that the project was switching from raw data processing to analysis. The extended "anyone can participate here - even the slowest tortoise can join the fun" deadlines became meaningless from that moment.

I, too, ran my computers up to the wire, and a little bit longer when the final tapes couldn't be split because the database was full. But my last remaining first-run task was reported complete at 2 Apr 2020, 20:01:50 UTC. I would have expected that we would all be participating in the clean-up by now, but instead we're debating (yet again) our freedoms to bend and interpret the rules however we like.

Please let's just agree to differ, finish whatever work we've got (without worrying how we got it), and wish Eric and David the best of luck with their analysis, report-writing, and publishing.
ID: 2044563
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044562 - Posted: 13 Apr 2020, 15:33:29 UTC - in response to Message 2044519.  

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.

+1

And spoofing up to 1337 GPUs!!!!!!!!!!, to be able to get as many tasks as he wants, that's just ridiculous, and extreme CHEATING!!


. . And with the very low rate of resends available, it is also meaningless. Even if he was reporting 2 million GPUs, he would not receive more tasks than are available, and that is small numbers indeed. Getting new work at this point is more like winning the lottery, as has been remarked on several occasions. So one does have to wonder why he persists with such a pointless tactic at this time.

Stephen

< shrug >
ID: 2044562
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044560 - Posted: 13 Apr 2020, 15:31:58 UTC - in response to Message 2044552.  


Yeah, I have gotten 6 tasks since April 1st. I have one host still connected to SETI. When this host communicates with SETI the scheduler always does a 30 minute back off. I have seen the RTS show a few tasks; I manually have BOINC communicate and I get nothing for the next 30 minutes. The next time I look at the servers, there are zero tasks RTS.

Siran


Are you sure? I checked mine this morning; according to the "all tasks" list I had downloaded, processed, returned and validated 18 tasks in a single batch less than an hour previously.

When I rechecked about 2 hours later, THREE hours after the original download, they had already been removed from my lists.

Resends are getting processed and deleted at a very fast rate.

Hi Kevin,

Yep, I got 1 yesterday, 2 on the 11th, 1 on the 9th and 2 on the 3rd: 6 since the 1st. I even decided to go for CPU tasks instead of just GPU, since I was having problems with Rosetta constantly restarting its tasks and always running in high priority. An administrator said I should just set NNT and do something else, since it will take weeks to months for a fix for the app to work properly and set checkpoints. Others were having that same problem. So I added CPU tasks here in hopes of getting more tasks.

Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044560
Grumpy Swede
Volunteer tester
Joined: 1 Nov 08
Posts: 8170
Credit: 49,849,242
RAC: 147
Sweden
Message 2044559 - Posted: 13 Apr 2020, 15:31:15 UTC - in response to Message 2044557.  
Last modified: 13 Apr 2020, 15:39:28 UTC

. . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen

:(

Thanks Stephen, you got the meaning. Yes, I have some problems with the translations; you all know I'm not a native English speaker.

About the GPU number... It's just a simple TAG, as posted by Ian, nothing else. To prove this point, please tell me any number (an integer greater than 0 and under 32000, of course) and I will switch mine so you can see that nothing changes on my host. So please stop with this conspiracy theory.

So, then 64 is also a tag, and all the extra WU's for a spoofed "64" client are just a conspiracy, and the thousands of "extra" WU's on such a host do not exist??

Geeze, how the cheaters try.......

Nah, the spoofers should have been banned immediately when it was discovered that they are CHEATING.

Basta!!!
ID: 2044559
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044558 - Posted: 13 Apr 2020, 15:27:45 UTC - in response to Message 2044512.  

Agreed, they should cancel on the server side all tasks still in the field and re-send them with much smaller deadlines. It would mean lost computation for the slowest of slow hosts (and the cheaters who consider they deserve more than the rest of us), but it would allow the last pending tasks to complete much faster. Adding a drastic limit per host is also better indeed, and should always have been added on top of the limit for CPU work and per GPU.


. . Well, not ALL tasks in the field, but maybe all those older than a week or thereabouts. So all hosts with tasks less than a week old would still have a day or 2 to clear them. As for an overall limit on task numbers, believe it or not, there is such a limit in place. But obviously some have found a way to circumvent it. S@H is not perfect :( But certainly forcing resends on all older tasks would expedite the process of cleaning up the project.

Stephen

. .
ID: 2044558
juan BFP Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044557 - Posted: 13 Apr 2020, 15:27:00 UTC - in response to Message 2044556.  

. . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen

:(

Thanks Stephen, you got the meaning. Yes, I have some problems with the translations; you all know I'm not a native English speaker.

About the GPU number... It's just a simple TAG, as posted by Ian, nothing else. To prove this point, please tell me any number (an integer greater than 0 and under 32000, of course) and I will switch mine so you can see that nothing changes on my host. So please stop with this conspiracy theory.
ID: 2044557
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044556 - Posted: 13 Apr 2020, 15:19:32 UTC - in response to Message 2044506.  

Hi Juan,
I would not agree with that statement. That would not be fair to those of us who do NOT spoof (cheat) as Ville Saari does. He has over 70K STILL IN PROGRESS! Why allow him to get ALL the resends and not let any through to anyone else? I still have BOINC set to get anything I can from SETI, be it GPU or CPU tasks. If only active hosts are allowed to get resends, then, as I said, it is not fair to those of us who do not cheat.
[edit] And, you cannot tell me that Ville Saari is not cheating by spoofing. One of his PCs is said to have 1337 GPUs!!!!!!!!!! He is CHEATING!!!!!!! [/edit]
[edit2] The last time I looked at his PC list that PC said it had 64 GPUs. He increased the spoofing variable value (or whatever it is called in the software) just so he could get the lion's share of any tasks ready to send. [/edit2]
Have a great day! :)
Siran


. . I think there are some translation issues here. I think what Juan was trying to say is that hosts that are still active and contacting the servers regularly AND returning any work assigned to them promptly should be given the resends, rather than hosts that are sitting on large numbers of tasks and not returning them. I agree that still doesn't preclude hosts that have massive cached numbers if they are still returning work regularly, but it is hard to cover all cases. Hence Richard's call to those individuals to tend to their rigs. It is a moot point anyway because I do not foresee the Berkeley guys making such changes even at this stage of the project.

Stephen

:(
ID: 2044556
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044555 - Posted: 13 Apr 2020, 15:17:41 UTC - in response to Message 2044525.  
Last modified: 13 Apr 2020, 15:35:33 UTC

watching the mind melting rage of those who think this is cheating, and do not even understand the significance of the number 1337, is great entertainment for my morning coffee.

i can say with certainty, the GPU number you are seeing is a label only, and in no way affects how many tasks he's getting.

Hi Ian,

Sorry to disagree with you here, but I do. Case in point, my current host:
When I look at my host list it shows [2] NVIDIA GPUs. Guess what, I HAVE 2 NVIDIA GPUs in this host. Now, SETI upped the amount of tasks per device; I was getting 150 CPU tasks and 300 GPU tasks. Before I added the second GPU I was getting 150. Let me whip out my calculator... 1337 x 150 = 200550, and 70000 / 150 = 467 (rounded up 1). So I disagree and say that he got more tasks because he cheated with the spoofing, and the GPU number does have a bearing on the number of GPU tasks one gets. Since SETI was putting out fewer tasks over the past month or more, he got a lion's share of what they did put out to be sent. If he hadn't cheated, my host could probably still be crunching, and not just 1 per day or whatever it is now since 4/1. I have gotten 6 since April 1st; he still has over 70K! If he hadn't raised the number from 64 to 1337, he would not have as many today.
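The arithmetic above can be sanity-checked with a short script. This is only a sketch of the behaviour being described in the thread (a per-GPU in-progress cap multiplied by the number of GPUs a host reports); the 150-per-GPU figure is taken from the post, not from SETI@home's actual server configuration.

```python
import math

# Sketch of the per-host cap described in the post: BOINC-style schedulers
# cap the number of "in progress" tasks per device, so the cap scales with
# however many GPUs a host *reports* having. The 150-per-GPU figure is
# an assumption taken from the post, not the project's real server config.

PER_GPU_CAP = 150  # assumed per-GPU in-progress limit (from the post)

def in_progress_cap(reported_gpus: int, per_gpu_cap: int = PER_GPU_CAP) -> int:
    """Maximum GPU tasks a host may hold, given its reported GPU count."""
    return reported_gpus * per_gpu_cap

print(in_progress_cap(2))     # a real 2-GPU host -> 300
print(in_progress_cap(1337))  # a host reporting 1337 GPUs -> 200550

# A 70,000-task backlog expressed in "150-task hosts", rounded up:
hosts_equivalent = math.ceil(70_000 / PER_GPU_CAP)
print(hosts_equivalent)  # 467
```

This matches the figures quoted in the post: a host reporting 1337 GPUs would be allowed several hundred real hosts' worth of cache.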

Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044555
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 1,297
United Kingdom
Message 2044552 - Posted: 13 Apr 2020, 15:13:29 UTC - in response to Message 2044536.  


Yeah, I have gotten 6 tasks since April 1st. I have one host still connected to SETI. When this host communicates with SETI the scheduler always does a 30 minute back off. I have seen the RTS show a few tasks; I manually have BOINC communicate and I get nothing for the next 30 minutes. The next time I look at the servers, there are zero tasks RTS.

Siran


Are you sure? I checked mine this morning; according to the "all tasks" list I had downloaded, processed, returned and validated 18 tasks in a single batch less than an hour previously.

When I rechecked about 2 hours later, THREE hours after the original download, they had already been removed from my lists.

Resends are getting processed and deleted at a very fast rate.
Kevin


ID: 2044552
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044550 - Posted: 13 Apr 2020, 15:10:42 UTC - in response to Message 2044503.  

I agree with Richard, bunkering is not a good option in these last days.
IMHO what is needed is a way to send the resends only to those hosts that are still actively crunching and have returned work in the last few days, maybe a week or something similar. And if possible in small batches of files only, something like 10 WUs max per host.
The results in the field are already distributed, but a lot will be expired, and if they are sent again to non-crunching hosts (i.e. a host with an AV problem, a bunker, etc.) it will be a long wait until they reach the deadline again.
And BTW the deadline must be reduced to the minimum possible to expedite the crunch (by making the host enter panic mode if it runs multiple projects). Something radical like 3 days for a GPU WU and 5-7 days for a CPU WU.
My 0.02
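The "panic mode" mentioned here is the BOINC client's deadline-pressure behaviour: the client simulates its queue earliest-deadline-first and runs at high priority any task predicted to miss its deadline, which is why shorter deadlines push multi-project hosts to crunch SETI work first. A minimal sketch of that decision under simplified assumptions (one processor, tasks run back to back, no other load); the function name and task figures are illustrative, not BOINC's actual implementation:

```python
# Simplified sketch of BOINC's deadline-pressure ("panic mode") idea:
# walk the queued tasks earliest-deadline-first and flag any task that
# would finish after its deadline for high-priority execution. Real
# BOINC's round-robin simulation is far more detailed; this assumes one
# processor running tasks sequentially with no other load.

def tasks_at_risk(tasks, now=0.0):
    """tasks: list of (name, remaining_hours, deadline_hours_from_now)."""
    clock = now
    at_risk = []
    for name, remaining, deadline in sorted(tasks, key=lambda t: t[2]):
        clock += remaining  # finish time if run back to back, EDF order
        if clock > deadline:
            at_risk.append(name)  # would miss -> run at high priority
    return at_risk

queue = [
    ("wu_a", 2.0, 72.0),   # 3-day deadline, comfortable
    ("wu_b", 30.0, 72.0),  # 3-day deadline, still fine
    ("wu_c", 50.0, 72.0),  # would finish at hour 82 -> misses
]
print(tasks_at_risk(queue))  # ['wu_c']
```

With week-long deadlines almost nothing triggers this check; cutting them to 3 days, as suggested, makes cached SETI tasks jump the queue on hosts running several projects.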


. . Again +1

Stephen

:)
ID: 2044550
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044549 - Posted: 13 Apr 2020, 15:08:57 UTC - in response to Message 2044497.  

Well, I think we can say the Eagle has landed.
Results received in last hour **	0	74	5,762	0m
Workunits waiting for validation	0	0	0	0m
Workunits waiting for assimilation	0	0	2	0m
Workunit files waiting for deletion	0	0	1	0m
Result files waiting for deletion	0	0	12	0m
Workunits waiting for db purging	0	60	5,498	0m
Results waiting for db purging		0	669	60,657	0m
[SSP as of 13 Apr 2020, 10:30:04 UTC]

The key one is 'assimilation' - that's been effectively zero for a while, while late results continue to trickle in. Like Grant, my personal Valid list shows all 'valid tasks with all workunit tasks reported' have been processed: those that are left are from the period at the end of March when extra replications were created, and some have not yet been returned.

So the message to all bunkerers is: "Thank you for keeping out of the way while the servers recovered from their overload. But that phase is now over. If any of your computers now shows tasks in progress on the web page, please check it and act accordingly."

* if you have switched to another project, please switch at least some resources back to SETI to help finish the run
* if you have the tasks, and the computer is idle, please restart it
* if you don't have the tasks - if they're ghosts - try fetching work at various times of day to see if you can recover them

If you don't take some sort of action like that, you're now part of the problem, rather than part of the project you claim to support.


+1
ID: 2044549
juan BFP Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044546 - Posted: 13 Apr 2020, 15:04:20 UTC - in response to Message 2044536.  

since a host with over 50K tasks is cheating, or more precisely the owner of the host.

FYI, in the past my host ran with an ultra large buffer, 3x this figure, and all the WUs were crunched within the deadlines.
Now it is crunching the WU with a deadline of May 18 04:11:11 AM EST.
So it will report them more than a month in advance of the deadline!
So what is the problem with that?
I'm still waiting for someone to show me where this "hidden rule" that says a host can't have more than xxxx WUs is written, so I could read it.
What is wrong is if the host DLs more WUs than it has the capacity to crunch on time.
Live long & Prosper!
ID: 2044546
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5384
Credit: 192,787,363
RAC: 1,426
Australia
Message 2044544 - Posted: 13 Apr 2020, 15:01:53 UTC - in response to Message 2044464.  

. . A very long time since neither of the two listed machines seems to have returned any results since April Fool's Day.

https://setiathome.berkeley.edu/results.php?hostid=8652081&offset=52360&show_names=0&state=1&appid= is no longer a valid link, because the host has returned a few hundred results in the last few hours.
At least one of those claims must be bullshit as they contradict each other. Try to get your facts right before blaming people.


. . Or the SSP changed between the times when each of us looked at it. It does change from time to time you know ...

Stephen

:)
ID: 2044544
rob smith Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 18643
Credit: 416,307,556
RAC: 863
United Kingdom
Message 2044543 - Posted: 13 Apr 2020, 15:01:45 UTC

Even better, and for no effort on the part of the servers, he just detaches those two computers and then deletes his SETI account. Doing so would show that he at least had some grain of thought for the rest of the SETI community.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2044543
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7346
Credit: 44,181,323
RAC: 540
United States
Message 2044536 - Posted: 13 Apr 2020, 14:43:59 UTC - in response to Message 2044523.  

My host is not actively crunching.

Your hosts show: Last contact 13 Apr 2020, 13:42:52 UTC
and the last WU reported was: 12 Apr 2020, 21:56:54 UTC
So it has been actively crunching in the last week at least.
By my suggestion it is a candidate to receive the resends.
Maybe a week or something similar

Hi Juan,

Yeah, I have gotten 6 tasks since April 1st. I have one host still connected to SETI. When this host communicates with SETI the scheduler always does a 30 minute back off. I have seen the RTS show a few tasks; I manually have BOINC communicate and I get nothing for the next 30 minutes. The next time I look at the servers, there are zero tasks RTS.

How about the servers figuring out that a host has over 50K tasks in progress, taking a bunch of them and passing them out to hosts with none? That sounds fair to me, since a host with over 50K tasks is cheating, or more precisely the owner of the host is. Yeah, I know, the SETI software checks and says when a host has met its daily allotment. I have seen that on this host, and yet not long afterwards the host will get more tasks. In my mind a host that has > 50K tasks has WAY more than a daily allotment.

Have a great day! :)

Siran
CAPT Siran d'Vel'nahr XO - L L & P _\\//
USS Vre'kasht NCC-33187
Winders 10 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 2044536
juan BFP Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9764
Credit: 572,710,851
RAC: 8,616
Panama
Message 2044533 - Posted: 13 Apr 2020, 14:29:36 UTC - in response to Message 2044531.  
Last modified: 13 Apr 2020, 14:31:42 UTC

1337 = leet = elite

it's like internet-culture-pseudo-hacker lingo.

Living and learning! Thanks for the info. It's coffee time here too. Enjoy.
ID: 2044533
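For anyone who missed the joke being explained above: 1337 is "leet" (elite) written in leetspeak, which substitutes look-alike digits for letters. A toy encoder, using a common informal substitution map (not any official standard):

```python
# Toy leetspeak encoder, just to show why the number 1337 reads as
# "leet"/"elite". The substitution map is a common convention only.
LEET_MAP = {"l": "1", "e": "3", "t": "7", "o": "0", "a": "4"}

def to_leet(word: str) -> str:
    """Replace mapped letters with their look-alike digits."""
    return "".join(LEET_MAP.get(ch, ch) for ch in word.lower())

print(to_leet("leet"))  # 1337
```

So a host reporting exactly 1337 GPUs is a deliberate wink, not a count anyone expected to be taken literally.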


 
©2020 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.