Spring Cleaning (Jun 19 2013)

Message boards : Technical News : Spring Cleaning (Jun 19 2013)

Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1444
Credit: 957,058
RAC: 0
United States
Message 1382796 - Posted: 19 Jun 2013, 19:12:40 UTC

Here's a (long overdue) status report. I was out of the lab for all of May. During that time Eric, Jeff, and company got V7 out the door. Outside of that, operations were pretty much normal (weekly outages, a couple of server hiccups, and slow but steady scientific analysis and software development). V7 gives us, among other things, a new ET signature to look for: autocorrelations. Eric described this and more in his thread here.
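
For anyone wondering what an autocorrelation signature actually means: we're checking whether a chunk of data is correlated with a time-shifted copy of itself, which is what you'd see if a complex waveform were deliberately transmitted twice in a row. This isn't our actual v7 analysis code -- just a minimal numpy sketch of the idea, with all the names made up for illustration:

# Toy sketch of an autocorrelation search -- NOT the SETI@home v7 client code.
import numpy as np

def autocorrelation(samples):
    # Circular autocorrelation via FFT (Wiener-Khinchin), normalized so lag 0 == 1.
    spectrum = np.fft.fft(samples)
    ac = np.fft.ifft(spectrum * np.conj(spectrum)).real
    return ac / ac[0]

rng = np.random.default_rng(42)
data = rng.normal(size=8192)        # background noise
pattern = rng.normal(size=2048)     # some complicated waveform
data[0:2048] += pattern             # embedded once...
data[4096:6144] += pattern          # ...and repeated 4096 samples later

ac = autocorrelation(data)
lag = 1 + np.argmax(ac[1:4097])     # skip the trivial peak at lag 0
print("strongest autocorrelation at lag", lag)   # ~4096

A strong peak at some nonzero lag, well above what noise alone produces, is the flavor of thing this new signature is looking for.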

I think it's safe to say the move to the colocation facility is looking to be a success. The extra bandwidth alone is a huge improvement (yes?). Having less mental clutter involving system admin is another gain. Thus far we've had only one minor crisis that required us to actually go there and fix things in person. That's not the worst problem, as the facility is easy enough to get to and near a good cafe. I still spend a lot of time doing admin, but definitely less than before, and with the warm fuzzy feeling that if there are power or heating issues somebody else will deal with them.

Server-news-wise, we did acquire another donated box - a 3U monster that actually contains four motherboards, each with two hexa-core Xeon CPUs, 72GB of memory, and 3 SATA drives. Despite being in one box, they are four distinct machines: muarae1, muarae2, muarae3, and muarae4. You may have noticed (or not) that muarae1 has already been employed to replace thinman as the main SETI@home web site server. We hope to retire thinman soon, if only because it is physically too large by today's standards (3U, 4 CPUs, 28GB) and thus costing us too much money (the colocation facility charges us by the rack space unit). It is also a couple of inches too deep for its current rack, which hinders air flow. The plans for the remaining muaraes are still being debated; Eric is already using one of them as a GALFA compute server. By the way, as I write this thinman is still around and getting web hits from the few people/robots out there with hard-wired IP addresses or really stubborn DNS caches.

The current big behind-the-scenes push involves cleaning up the database to get all the different data "epochs" (classic, enhanced, multibeam, non-blanked, hardware-blanked, software-blanked, V7, etc.) into one unified format, while (finally) closing in on a giant programming library to reduce and analyze data from any time or source. Part of the motivation is the acquisition of data from the Green Bank Telescope and the need to fold that data into our current suite of tools. In particular, my current task is porting the drifting RFI detection algorithm (which I last touched 14 years ago!) from the hard-wired SERENDIP IV version to a generalized version.
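
For the curious, the gist of drifting RFI rejection (heavily simplified, and emphatically not the SERENDIP IV code or the new library) is that terrestrial interference tends to follow one smooth frequency-vs-time drift and shows up no matter where the telescope is pointed, whereas a genuine celestial signal should only appear at one spot on the sky. Here's a toy Python sketch of that idea; the function and parameter names are made up for illustration:

# Toy sketch of the drifting-RFI idea -- NOT the SERENDIP IV or generalized code.
import numpy as np

def flag_drifting_rfi(times, freqs, pointings,
                      window_s=60.0, max_resid_hz=1.0, min_pointings=5):
    """Flag detections that track one frequency drift across many sky pointings."""
    times = np.asarray(times, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    pointings = np.asarray(pointings)
    rfi = np.zeros(times.size, dtype=bool)

    for t0 in np.unique(times):
        # gather every detection inside one time window starting at t0
        in_win = np.where((times >= t0) & (times < t0 + window_s))[0]
        if in_win.size < min_pointings or np.unique(times[in_win]).size < 2:
            continue
        # fit a single frequency-vs-time drift line to the windowed detections
        slope, intercept = np.polyfit(times[in_win], freqs[in_win], 1)
        resid = freqs[in_win] - (slope * times[in_win] + intercept)
        follows_one_drift = np.abs(resid).max() < max_resid_hz
        seen_everywhere = np.unique(pointings[in_win]).size >= min_pointings
        if follows_one_drift and seen_everywhere:
            rfi[in_win] = True   # same drifting tone at many sky positions: RFI
    return rfi

The real test is of course hairier than this; the sketch is just to give a feel for what "drifting RFI" means.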

Oh yeah: there is a dearth of work as I'm about to post this message. We are on it. We burned through the last batch much more quickly than expected.

- Matt

-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
ID: 1382796
Profile morpheus
Joined: 5 Jun 99
Posts: 71
Credit: 52,480,762
RAC: 33
Germany
Message 1382799 - Posted: 19 Jun 2013, 19:33:36 UTC - in response to Message 1382796.  

Thank you for the heads up.

Btw:
Hosts (http://setiathome.berkeley.edu/sah_status.html)
[...]
muarae1: Intel Server (2 x hexa-core 3.07Hz Xeon, 76 GB RAM)
should be
muarae1: Intel Server (2 x hexa-core 3.07GHz Xeon, 76 GB RAM)

;)
.:morpheus:.
ID: 1382799
Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1444
Credit: 957,058
RAC: 0
United States
Message 1382801 - Posted: 19 Jun 2013, 19:35:39 UTC - in response to Message 1382799.  


Hosts (http://setiathome.berkeley.edu/sah_status.html)
[...]
muarae1: Intel Server (2 x hexa-core 3.07Hz Xeon, 76 GB RAM)
should be
muarae1: Intel Server (2 x hexa-core 3.07GHz Xeon, 76 GB RAM)


Ha! Yes. Seemed a little less impressive than it was.

- Matt
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
ID: 1382801
Claggy
Volunteer tester

Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1382802 - Posted: 19 Jun 2013, 19:37:13 UTC - in response to Message 1382801.  

Welcome back Matt, and thanks for the update.

Claggy
ID: 1382802
Ulrich Metzner
Volunteer tester
Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Germany
Message 1382808 - Posted: 19 Jun 2013, 19:57:38 UTC

Thanks for the update, much appreciated!
Aloha, Uli

ID: 1382808
Profile mr.mac52
Joined: 18 Mar 03
Posts: 67
Credit: 245,882,461
RAC: 0
United States
Message 1382845 - Posted: 19 Jun 2013, 21:51:45 UTC

Understanding the backstory about SETI@home helps make sense of what we volunteers experience on a daily basis.

Thanks for keeping this line of communication open and functional, Matt!

John
ID: 1382845
Rolf

Joined: 16 Jun 09
Posts: 114
Credit: 7,817,146
RAC: 0
Switzerland
Message 1382850 - Posted: 19 Jun 2013, 22:09:37 UTC

Thanks, Matt, for the news!
Just one small question: what about the server named "khan"?
ID: 1382850
Tom*

Joined: 12 Aug 11
Posts: 127
Credit: 20,769,223
RAC: 9
United States
Message 1382860 - Posted: 19 Jun 2013, 22:37:30 UTC

What about the server named "khan"?


An appropriate name for a server that's been asleep for 300 cycles and has now woken up, causing mischief in the universe of SETI.

"The Wrath of Khan"!!
ID: 1382860
Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1444
Credit: 957,058
RAC: 0
United States
Message 1382867 - Posted: 19 Jun 2013, 23:13:41 UTC

Oh yeah, khan... I mean... KHAN!!!!

That's an older server that the Berkeley Wireless Research Center was getting rid of, so we took it. It was the first thing we installed at the colo facility (to test the waters), and apparently we are now employing it as an NTPCkr/RFI test server. I added it to the machine list on the server status page.

- Matt
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
ID: 1382867
Thomas
Volunteer tester

Joined: 9 Dec 11
Posts: 1499
Credit: 1,345,576
RAC: 0
France
Message 1383053 - Posted: 20 Jun 2013, 15:11:53 UTC - in response to Message 1382796.  

Thanks so much for the heads-up, Matt!
Much appreciated! :)

The extra bandwidth alone is a huge improvement (yes?)

YEEEEESSSSSSSS !!!!
Good luck with cleaning up the database...
ID: 1383053
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1383071 - Posted: 20 Jun 2013, 15:32:47 UTC
Last modified: 20 Jun 2013, 15:33:17 UTC

Thank you for the updates, Matt.
So good to have you in the lab and hear from you again.

Yes, I had noted muarae1 doing the web page serving on the status page a day or so ago. Turns out the muarae box had more to offer than meets the eye. What a wonderful server donation!!!

Best wishes on your continued efforts for the project.

Meow.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1383071
JohnDK Crowdfunding Project Donor*Special Project $250 donor
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1383085 - Posted: 20 Jun 2013, 16:04:35 UTC - in response to Message 1383053.  

The extra bandwidth alone is a huge improvement (yes?)

YEEEEESSSSSSSS !!!!

+1000
ID: 1383085
Profile Michael Hoffmann
Volunteer tester

Joined: 4 Jun 08
Posts: 26
Credit: 3,284,993
RAC: 0
Germany
Message 1383430 - Posted: 21 Jun 2013, 17:45:02 UTC

Just out of curiosity: What is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch which someone found in the corner of a dark cellar room?
Om mani padme hum.
ID: 1383430
Profile Donald L. Johnson
Joined: 5 Aug 02
Posts: 8240
Credit: 14,654,533
RAC: 20
United States
Message 1383492 - Posted: 21 Jun 2013, 21:44:56 UTC - in response to Message 1383430.  

Just out of curiosity: What is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch which someone found in the corner of a dark cellar room?

I suspect that the answer might be here.
Donald
Infernal Optimist / Submariner, retired
ID: 1383492
Profile Michael Hoffmann
Volunteer tester

Joined: 4 Jun 08
Posts: 26
Credit: 3,284,993
RAC: 0
Germany
Message 1384001 - Posted: 23 Jun 2013, 19:03:28 UTC - in response to Message 1383492.  

Just out of curiosity: What is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch which someone found in the corner of a dark cellar room?

I suspect that the answer might be here.


Thank you, Donald! That was exactly what I was looking for.
Om mani padme hum.
ID: 1384001
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1384382 - Posted: 24 Jun 2013, 22:16:47 UTC - in response to Message 1384373.  

What about the Near Time Persistency Checker?

http://setiathome.berkeley.edu/forum_thread.php?id=72057&postid=1382867

Cheers.
ID: 1384382
Profile KWSN THE Holy Hand Grenade!
Volunteer tester
Joined: 20 Dec 05
Posts: 3187
Credit: 57,163,290
RAC: 0
United States
Message 1384916 - Posted: 26 Jun 2013, 19:22:44 UTC
Last modified: 26 Jun 2013, 19:24:02 UTC

Since the SETI team seems to have more "time on their hands" after the move to the co-location facility, I'll mention again that:

The "Multi-Beam Data Recorder Status" hasn't updated in 2 years. (and counting)

and

The "SETI@home Data Distribution History" hasn't updated since August 2008.

Just wanted to draw the staff's attention in case they've forgotten: I know these items are low priority, but they shouldn't be forgotten!

Hello, from Albany, CA!...
ID: 1384916
Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1444
Credit: 957,058
RAC: 0
United States
Message 1384963 - Posted: 26 Jun 2013, 23:06:02 UTC

Ah, yes - those web pages are a little embarrassing... But FYI...

The multi-beam data recorder status broke after a server crash at AO, followed by stricter network security policies at both AO and UCB. Fair enough, but getting this working again requires a complete rewrite of the code and poking holes in the firewall, which we aren't exactly comfortable with. So it just sits there, and I admit it's unsightly.

The data distribution history... I had forgotten why this was broken, so I recompiled the code and tried running it again, and then remembered: around 2008 we got some corrupted data in the database, which breaks several scripts/programs that try to access the corrupt areas, including the script that generates this page. It's the kind of corruption that isn't breaking anything important, and it would get fixed after we tackle some global database housecleaning projects that are (obviously) still ongoing. Also unsightly...

Thanks for the reminders, though.

- Matt
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
ID: 1384963
Cameron
Joined: 27 Nov 02
Posts: 110
Credit: 5,082,471
RAC: 17
Australia
Message 1386440 - Posted: 1 Jul 2013, 13:57:24 UTC

Thanks, Matt, for your usual news from behind the scenes.

Glad to see the 26 Jun 2013 timestamp on the Data Distribution History page.

Hopefully fixing the corrupted data won't take too long once the team can finally get to it.
ID: 1386440
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1393038 - Posted: 20 Jul 2013, 22:38:13 UTC

Has there been any progress with finding tweaks/optimizations to the database itself to get more I/O out of it? If I remember correctly, the main reason for the current WU limits was to keep the database from growing too large between the weekly clean-ups.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1393038