Message boards :
Technical News :
/dev/null (Mar 16 2015)
Matt Lebofsky Send message Joined: 1 Mar 99 Posts: 1444 Credit: 957,058 RAC: 0 |
Happy Monday! So yeah things were looking good Friday afternoon when I got marvin (and the Astropulse database) working enough to generate new work and insert new results, and thus bring Astropulse on line. A couple stupid NFS hangs at the end of the day rained on my parade, but things were still working once stuff rebooted. But turns out pretty much all the data in our queue was already split by Astropulse so only a few thousand workunits were generated. We broke the dam, but there was not much on the other side. There will be actual AP work coming on line soon (the raw data has to go through all the software blanking processing hence the delay). Meanwhile over the weekend our main science database server on paddym crashed due to a bungled index in the result table. I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon. - Matt -- BOINC/SETI@home network/web/science/development person -- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 31013 Credit: 53,134,872 RAC: 32 |
/dev/random Thanks for the information, and good luck on the corruption. |
Bill Butler Send message Joined: 26 Aug 03 Posts: 101 Credit: 4,270,697 RAC: 0 |
Thanks for all your effort and work Matt. We all appreciate it. "It is often darkest just before it turns completely black." |
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Thanks for the update, and all the efforts. Any progress in getting Seti Beta's issues fixed? Claggy |
betreger Send message Joined: 29 Jun 99 Posts: 11416 Credit: 29,581,041 RAC: 66 |
Matt, I do look forward to the repairs, and thanx for your efforts. A question I ask: what happened to all those channels that were split while AP was down? I would think there would be lots of APs waiting. |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Thank you very much, Matt, for your continued updates on the gremlins coming and going. Very refreshing to have that information shared with us. Meow! "Time is simply the mechanism that keeps everything from happening all at once." |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon. RAID5 used? |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon. Don't tell me you belong to BAARF? :-P |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon. LoL, no. But the mention of BAARF led to an answer to my unspoken question: RAID5 (even when used) doesn't check parity on reads. Hence a "spurious disk error" is quite possible. I had a better impression of the error-correction abilities of such arrays before. http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt |
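[Editor's note: the point Raistmer is making — that RAID5 parity is only consulted when a disk reports a failure or during an explicit scrub, not on ordinary reads — can be illustrated with a toy sketch. This is a simplified model, not how any real controller is implemented; block sizes and the single fixed parity disk are assumptions (real RAID5 rotates parity across disks).]

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks, as RAID5 computes per stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A toy 3-data-disk stripe plus its parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Silent corruption on disk 1: the contents change but the disk reports no error.
data[1] = b"BXBB"

# A normal read just fetches the requested block -- parity is NOT consulted,
# so the corrupted data is returned as if it were good.
read_back = data[1]
assert read_back == b"BXBB"

# Only an explicit scrub (recomputing parity over the whole stripe) notices.
def scrub_ok(blocks, stored_parity):
    return parity(blocks) == stored_parity

assert not scrub_ok(data, p)  # mismatch detected only when we go looking for it
```

This is why periodic scrubbing (or a checksumming layer above the array) matters: without it, a silently flipped sector sails straight through to the application, which matches the "spurious disk error" scenario described above.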
Cheopis Send message Joined: 17 Sep 00 Posts: 156 Credit: 18,451,329 RAC: 0 |
Matt, I want to thank you and the rest of the team for keeping us in the know about what's going on. At the same time, I cannot remember seeing any responses from the team on questions that have been asked about whether or not it's time to start looking at new database software. My first thought on the matter is that whatever Google uses for its search databases must be robust enough to handle SETI data. Google also tends to get involved in some science. Perhaps one of the folks who read these forums might be in a position to float a question to Google representatives to see if there might be a way to get help, or at least advice? I suspect that Google heavily utilizes solid state storage, but you've been indicating that the problem seems to be software, not hardware. If Google is using an in-house database (which seems highly likely), then they certainly have some extremely talented database people available who might be able to help even without providing proprietary database software. So, does anyone here work at Google, or have a solid connection there that can be pinged to see if help is an option? It seems to me that Google is forward-looking enough that they might be happy to help. |
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
UC Berkeley have their own "Database Services" (lists several DB) http://ist.berkeley.edu/services/catalog/database I think they were already asked by SETI@home staff for better solution (PostgreSQL ?) but "returned empty" (lack of some needed features?) PostgreSQL seems to have some major users: http://en.wikipedia.org/wiki/PostgreSQL#Prominent_users Comparison of Limits: http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Limits  - ALF - "Find out what you don't do well ..... then don't do it!" :)  |
Cheopis Send message Joined: 17 Sep 00 Posts: 156 Credit: 18,451,329 RAC: 0 |
Aye, BilBg, but I figure the SETI@home team have already at least examined most of the 'easy' solutions and obvious potential problems, as well as approaching local experts within the university system. If anyone's got a database program that can handle SETI@home's databases, it's probably either an insurance company, the NSA, or Google. Of those three, I figure the one most likely to be interested in helping is Google :) |
David S Send message Joined: 4 Oct 99 Posts: 18352 Credit: 27,761,924 RAC: 12 |
The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford. Considering this project's budget, that basically means two things: it's free, and it doesn't require multiple full time staff to do nothing else but maintain it. If they had Google's budget, it wouldn't be a problem. David Sitting on my butt while others boldly go, Waiting for a message from a small furry creature from Alpha Centauri. |
betreger Send message Joined: 29 Jun 99 Posts: 11416 Credit: 29,581,041 RAC: 66 |
Yes + a lot. |
Cheopis Send message Joined: 17 Sep 00 Posts: 156 Credit: 18,451,329 RAC: 0 |
The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford. Considering this project's budget, that basically means two things: it's free, and it doesn't require multiple full time staff to do nothing else but maintain it. If they had Google's budget, it wouldn't be a problem. Aye, but I don't know what Google uses, or how much they would charge to license it if it's in-house code (or if they would allow a license at all). It's conceivable that it's ridiculously well-documented and would be moderately easy to administer. I know, I know, the very idea of well-documented, easy-to-use code in a large corporation seems alien, but it is possible. Google searches just seem to be far too fast to be based on spaghetti code. (Go ahead, laugh at me now.) I still think it's worth thinking about. If anyone has a nearly off-the-shelf solution that can handle the complexity of the SETI@Home database, it's Google. *shrug* It's an idea. Not like I can command anyone to do anything here. SETI@Home has a little clout in the intellectual / computing world simply based on its history. Google might be happy to help, in order to associate itself with the project in a meaningful way. |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
Companies like Google also have one thing on their side ... if they screw up a search, it's not really a big deal. The next time their web crawlers update their database, the search will be OK again. The Seti team has, hmmm, 15 years of data that can't be lost. No one will convince me there were NOT many swear words uttered (most likely screamed) when things went wrong with the database, or that the team is not still sweating bullets over whether their data can be reliably recovered. (Now this is an assumption on my part.) ALL because they have a small budget and are moving files around because their server storage isn't big enough. The Seti yearly budget would also probably not even be comparable to Google's hourly power budget for one data center. Rack space 'rental' in a data center is not cheap; assumption again, Seti probably resides in about 3 racks, maybe 5 with support computers. |
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford. Yes, and they can afford PostgreSQL as it is free http://www.postgresql.org/ http://www.postgresql.org/download/ http://www.enterprisedb.com/products-services-training/pgdownload Out of curiosity I downloaded the Windows 32-bit installer and it is only 56 MB. According to Wikipedia it is on par with Oracle in speed: http://en.wikipedia.org/wiki/PostgreSQL#Benchmarks_and_performance E.g. "In April 2012, Robert Haas of EnterpriseDB demonstrated PostgreSQL 9.2's linear CPU scalability using a server with 64 cores" I remember some post saying that Informix works slowly and at the same time does not load the CPU or HDD ... Found it: "Informix never ceases to astonish me with the way it does things. The table rebuild is neither maxing out CPUs or I/O, primarily because it doesn't seem to be running the table creation in parallel. It's working on one table fragment at a time" http://setiathome.berkeley.edu/forum_thread.php?id=76106&postid=1600681#1600681 For me PostgreSQL seems/looks better than MySQL (and maybe better than the old version of Informix they use?) If they have an "effectively infinite amount of disk space (38TB usable)" and a "copy of the whole database as it lives on disk (about 13TB)" and have some time (of course), they may do some 'play' with PostgreSQL (e.g. examine whether there are tools/add-ons/interfaces to convert from Informix to PostgreSQL)  - ALF - "Find out what you don't do well ..... then don't do it!" :) |
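[Editor's note: the conversion 'play' BilBg suggests is usually done by exporting from Informix (its `UNLOAD` statement writes pipe-delimited text) and bulk-loading into PostgreSQL with `COPY ... FROM STDIN`. A toy sketch of the reshaping step, under simplifying assumptions: real UNLOAD files also backslash-escape `|` inside fields and have their own NULL/date conventions, which this ignores, and any table or file names would be project-specific.]

```python
def unload_to_copy(unload_text):
    """Convert Informix UNLOAD output (pipe-delimited, each row ending in '|')
    into tab-separated rows suitable for PostgreSQL's COPY FROM STDIN.
    Simplification: ignores backslash-escaped '|' and NULL conventions."""
    rows = []
    for line in unload_text.splitlines():
        if not line:
            continue
        if line.endswith("|"):      # UNLOAD terminates every row with '|'
            line = line[:-1]
        rows.append("\t".join(line.split("|")))
    return "\n".join(rows) + "\n" if rows else ""

sample = "42|3C50.2A1|1.37|\n43|3C50.2A2|0.91|\n"
print(unload_to_copy(sample), end="")
```

On the Informix side the export would look something like `UNLOAD TO 'results.unl' DELIMITER '|' SELECT * FROM result;` in dbaccess, and on the PostgreSQL side `\copy result FROM 'results.tsv'` in psql (table and file names hypothetical). At the 13 TB scale mentioned above, though, the real work would be schema translation and downtime planning, not the row format.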
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
My first quick look at PostgreSQL suggests 400 MB is considered a big dB, which should be plenty for now, but is that for 1 dB or for the 20 helper dBs required? The last specs I saw for Informix were petabytes for size limits. And I have no clue as to which one would perform better under the loads they put on it. |
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
My first quick look at PostgreSQL is 400 MB is considered a big dB ... Where did you see that??  - ALF - "Find out what you don't do well ..... then don't do it!" :) |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
My first quick look at PostgreSQL is 400 MB is considered a big dB ... I would have to go hunting for it. It was on an app download page. It was along the lines of "We are successfully running a 400MB dB with no problem." I'm not sure what their limit is, I didn't look. But when an app brags/advertises about 400MB, it's probably not far off their limitation. As I said, it was just a quick look at what they offered. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.