Technical News : Spring Cleaning (Jun 19 2013)
Matt Lebofsky (Joined: 1 Mar 99, Posts: 1444, Credit: 957,058, RAC: 0)

Here's a (long overdue) status report. I was out of the lab for all of May. During that time Eric, Jeff, and company got V7 out the door. Outside of that, operations were pretty much normal (weekly outages, a couple of server hiccups, and slow but steady scientific analysis and software development). V7 gives us, among other things, a new ET signature to look for: autocorrelations. Eric described this and more in his thread here.

I think it's safe to say the move to the colocation facility is shaping up to be a success. The extra bandwidth alone is a huge improvement (yes?). Having less mental clutter involving system admin is another gain. Thus far we've had only one minor crisis that required us to actually go there and fix things in person. That's not the worst problem, as the facility is easy enough to get to and near a good cafe. I still spend a lot of time doing admin, but definitely less than before, and with the warm fuzzy feeling that if there are power or heating issues somebody else will deal with them.

Server-news-wise, we did acquire another donated box: a 3U monster that actually contains four motherboards, each with 2 hexa-core Xeon CPUs, 72GB of memory, and 3 SATA drives. Despite being in one box, they are four distinct machines: muarae1, muarae2, muarae3, and muarae4. You may have noticed (or not) that muarae1 has already been employed to replace thinman as the main SETI@home web site server. We hope to retire thinman soon, if only because it is physically too large by today's standards (3U, 4 CPUs, 28GB) and thus costing us too much money (as the colocation facility charges us by the rack space unit). It is also too deep for its current rack by a couple of inches, hindering air flow. The plans for the remaining muaraes are still being debated. Eric is already using another as a GALFA compute server. By the way, as I write this thinman is still around and getting web hits from the few people/robots out there that have hard-wired IP addresses or really stubborn DNS caches.

The current big behind-the-scenes push involves cleaning up the database to get all the different data "epochs" (classic, enhanced, multibeam, non-blanked, hardware-blanked, software-blanked, V7, etc.) into one unified format, while (finally) closing in on a giant programming library to reduce and analyze data from any time or source. Part of the motivation is the acquisition of data from the Green Bank Telescope, and folding that data into our current suite of tools. In particular, my current task is porting the drifting RFI detection algorithm (which I last touched 14 years ago!) from the hard-wired SERENDIP IV version to a generalized version.

Oh yeah, there is a current dearth of work as I am about to post this message. We are on it. We burned through the last batch much quicker than expected.

- Matt

-- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
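(Aside on what an autocorrelation search actually looks for: a signal that is correlated with a delayed copy of itself, i.e. something that repeats after a fixed lag. The snippet below is only a rough, simplified illustration of that idea, not the SETI@home V7 client code; the function name, array names, and the FFT shortcut are assumptions made for the sketch.)

```python
# Rough illustration of an autocorrelation search, NOT the actual V7 client code.
# Assumption: 'samples' is a chunk of complex baseband data; a strong peak at a
# nonzero lag suggests a signal that repeats itself after a fixed delay.
import numpy as np

def autocorr_peak(samples):
    """Return (lag, normalized power) of the strongest nonzero-lag autocorrelation."""
    n = len(samples)
    # Zero-pad to 2n so the circular FFT correlation matches the linear one.
    spec = np.fft.fft(samples, 2 * n)
    acf = np.fft.ifft(spec * np.conj(spec))[:n]   # autocorrelation via Wiener-Khinchin
    acf = np.abs(acf) / np.abs(acf[0])            # normalize by the zero-lag power
    lag = int(np.argmax(acf[1:])) + 1             # ignore the trivial zero-lag peak
    return lag, float(acf[lag])

# Quick self-test with fake data: noise plus a pattern repeating every 64 samples.
rng = np.random.default_rng(0)
noise = rng.normal(size=4096) + 1j * rng.normal(size=4096)
pattern = rng.normal(size=64) + 1j * rng.normal(size=64)
repeating = np.tile(pattern, 64)
print(autocorr_peak(noise + 3 * repeating))       # expect a peak near lag 64
```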
morpheus (Joined: 5 Jun 99, Posts: 71, Credit: 52,480,762, RAC: 33)

Thank you for the heads up. Btw, on the Hosts page (http://setiathome.berkeley.edu/sah_status.html) [...]

muarae1: Intel Server (2 x hexa-core 3.07Hz Xeon, 76 GB RAM)

should be

muarae1: Intel Server (2 x hexa-core 3.07GHz Xeon, 76 GB RAM) ;)

.:morpheus:.
Matt Lebofsky (Joined: 1 Mar 99, Posts: 1444, Credit: 957,058, RAC: 0)

Ha! Yes. Seemed a little less impressive than it was.

- Matt

-- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
Claggy (Joined: 5 Jul 99, Posts: 4654, Credit: 47,537,079, RAC: 4)

Welcome back Matt, and thanks for the update.

Claggy
Ulrich Metzner (Joined: 3 Jul 02, Posts: 1256, Credit: 13,565,513, RAC: 13)

Thanks for the update, much appreciated!

Aloha, Uli
mr.mac52 (Joined: 18 Mar 03, Posts: 67, Credit: 245,882,461, RAC: 0)

Understanding the backstory about SETI@home helps make sense of what we volunteers experience on a daily basis. Thanks for keeping this line of communication open and functional, Matt!

John
Rolf (Joined: 16 Jun 09, Posts: 114, Credit: 7,817,146, RAC: 0)

Thanks for the news, Matt! Just one small question: what about the server named "khan"?
Tom* (Joined: 12 Aug 11, Posts: 127, Credit: 20,769,223, RAC: 9)

What about the server named "khan"?

An appropriate name for a server that's been asleep for 300 cycles and has now woken up causing mischief in the universe of SETI. "The Wrath of Khan"!!
Matt Lebofsky (Joined: 1 Mar 99, Posts: 1444, Credit: 957,058, RAC: 0)

Oh yeah, khan... I mean... KHAN!!!! That's an older server that the Berkeley Wireless Research Center was getting rid of, so we took it. It was the first thing we installed at the colo facility (to test the waters) and apparently we are now employing it as an NTPCkr/RFI test server. I added it to the server status page machine list.

- Matt

-- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
Thomas (Joined: 9 Dec 11, Posts: 1499, Credit: 1,345,576, RAC: 0)

Thanks so much for the heads-up, Matt! Much appreciated! :)

The extra bandwidth alone is a huge improvement (yes?)

YEEEEESSSSSSSS!!!!

Good luck with cleaning up the database...
kittyman (Joined: 9 Jul 00, Posts: 51478, Credit: 1,018,363,574, RAC: 1,004)

Thank you for the updates, Matt. So good to have you in the lab and hear from you again. Yes, I had noted muarae1 doing the web page serving on the status page a day or so ago. Turns out the muarae had more to offer than meets the eye. What a wonderful server donation!!! Best wishes on your continued efforts for the project.

Meow.

"Time is simply the mechanism that keeps everything from happening all at once."
JohnDK (Joined: 28 May 00, Posts: 1222, Credit: 451,243,443, RAC: 1,127)

The extra bandwidth alone is a huge improvement (yes?)

+1000
Michael Hoffmann (Joined: 4 Jun 08, Posts: 26, Credit: 3,284,993, RAC: 0)

Just out of curiosity: what is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch that someone found in the corner of a dark cellar room?

Om mani padme hum.
Donald L. Johnson (Joined: 5 Aug 02, Posts: 8240, Credit: 14,654,533, RAC: 20)

Just out of curiosity: what is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch that someone found in the corner of a dark cellar room?

I suspect that the answer might be here.

Donald
Infernal Optimist / Submariner, retired
Michael Hoffmann (Joined: 4 Jun 08, Posts: 26, Credit: 3,284,993, RAC: 0)

Just out of curiosity: what is the reason for the reprocessing of data from 2008/2009? Or is this unprocessed data from that epoch that someone found in the corner of a dark cellar room?

Thank you, Donald! That was exactly what I was looking for.

Om mani padme hum.
Wiggo (Joined: 24 Jan 00, Posts: 36850, Credit: 261,360,520, RAC: 489)

What about the Near Time Persistency Checker? http://setiathome.berkeley.edu/forum_thread.php?id=72057&postid=1382867

Cheers.
KWSN THE Holy Hand Grenade! (Joined: 20 Dec 05, Posts: 3187, Credit: 57,163,290, RAC: 0)

Since the SETI team seems to have more "time on their hands" since the move to the co-location facility, I mention again that:

The "Multi-Beam Data Recorder Status" hasn't updated in 2 years (and counting), and

The "SETI@home Data Distribution History" hasn't updated since August 2008.

Just wanted to draw the staff's attention, in case they've forgotten: I know these items are low-priority, but they shouldn't be forgotten!

Hello, from Albany, CA!...
Matt Lebofsky (Joined: 1 Mar 99, Posts: 1444, Credit: 957,058, RAC: 0)

Ah, yes - those web pages are a little embarrassing... But FYI...

The multi-beam data recorder status broke after a server crash at AO, followed by stricter network security policies at both AO and UCB. Fair enough, but getting this working again requires a complete rewrite of the code and poking holes in the firewalls, which we aren't exactly comfortable with. So this just sits there, and I admit it's unsightly.

The data distribution history... I had forgotten why this was broken, so I recompiled the code and tried running it again, and then remembered: around 2008 we got some corrupted data in the database that breaks several scripts/programs trying to access these corrupt areas, including the script that generates this page. It's the kind of corruption that isn't breaking anything important, and it should get fixed once we tackle some global database housecleaning projects that are (obviously) still ongoing. Also unsightly...

Thanks for the reminders, though.

- Matt

-- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
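(Aside: when a handful of corrupt rows break a reporting script, a common stopgap until a real database cleanup happens is to skip and log the bad rows instead of dying on the first one. The sketch below is purely illustrative: fetch_results(), parse_row(), and the field names are made-up placeholders, not the actual SETI@home scripts or schema.)

```python
# Hypothetical "skip the corrupt rows" stopgap for a reporting script.
# fetch_results and the field names are placeholders; the real SETI@home
# scripts and science-database schema are not reproduced here.
import logging

def parse_row(raw_row):
    """Decode one row; raises ValueError/KeyError/TypeError if the row is garbled."""
    return raw_row["day"], int(raw_row["n_results"])

def tally_distribution(fetch_results):
    """Tally results distributed per day, skipping rows that fail to decode."""
    totals, skipped = {}, 0
    for raw_row in fetch_results():
        try:
            day, n_results = parse_row(raw_row)
            totals[day] = totals.get(day, 0) + n_results
        except (ValueError, KeyError, TypeError) as err:
            skipped += 1
            logging.warning("skipping corrupt row: %r", err)
    logging.info("done; %d corrupt rows skipped", skipped)
    return totals
```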
Cameron (Joined: 27 Nov 02, Posts: 110, Credit: 5,082,471, RAC: 17)

Thanks, Matt, for your usual news from behind the scenes. Glad to see the 26 Jun 2013 timestamp on the Data Distribution History page. Hopefully fixing the corrupted data won't take too long when the team can finally get to it.
Cosmic_Ocean (Joined: 23 Dec 00, Posts: 3027, Credit: 13,516,867, RAC: 13)

Has there been any progress with finding tweaks/optimizations to the database itself to get more I/O out of it? If I remember correctly, the main reason for the current WU limits was to keep the database from growing too large between the weekly clean-ups.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)