/dev/null (Mar 16 2015)

Message boards : Technical News : /dev/null (Mar 16 2015)

Profile Matt Lebofsky
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Mar 99
Posts: 1444
Credit: 957,058
RAC: 0
United States
Message 1653687 - Posted: 16 Mar 2015, 21:55:46 UTC

Happy Monday!

So yeah, things were looking good Friday afternoon when I got marvin (and the Astropulse database) working well enough to generate new work and insert new results, and thus bring Astropulse on line. A couple of stupid NFS hangs at the end of the day rained on my parade, but things were still working once everything rebooted.

But it turns out pretty much all the data in our queue had already been split by Astropulse, so only a few thousand workunits were generated. We broke the dam, but there wasn't much on the other side. There will be actual AP work coming on line soon (the raw data has to go through all the software blanking processing first, hence the delay).

Meanwhile, over the weekend our main science database server, paddym, crashed due to a bungled index in the result table. I think this was due to a spurious disk error, but Informix was left in a sad state. I got it kinda back up and running Sunday night, but I've been spending all day repairing/checking that index (and the whole database), so we haven't been able to assimilate any results for a while. Once again: soon.
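
(For readers wondering what "repairing/checking that index" involves in practice: a minimal sketch of driving Informix's standard oncheck utility from Python, assuming nothing about the project's actual procedure; the database and table names here are made up.)

    import subprocess

    def check_table(db, table, repair=False):
        """Run Informix's oncheck against one table: -cD verifies data
        pages and -cI verifies indexes; -y answers 'yes' to the repair
        prompts that the index check can offer."""
        target = f"{db}:{table}"          # oncheck addresses objects as db:table
        for flags in ("-cD", "-cI"):
            cmd = ["oncheck", flags]
            if repair and flags == "-cI":
                cmd.append("-y")
            cmd.append(target)
            subprocess.run(cmd, check=True)

    # e.g. check_table("science_db", "result", repair=True)   # hypothetical names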

- Matt
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude
ID: 1653687 · Report as offensive
Profile Gary Charpentier · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 25 Dec 00
Posts: 30920
Credit: 53,134,872
RAC: 32
United States
Message 1653693 - Posted: 16 Mar 2015, 22:17:09 UTC

/dev/random

Thanks for the information, and good luck on the corruption.
ID: 1653693 · Report as offensive
Bill Butler
Joined: 26 Aug 03
Posts: 101
Credit: 4,270,697
RAC: 0
United States
Message 1653697 - Posted: 16 Mar 2015, 22:33:01 UTC

Thanks for all your effort and work, Matt.
We all appreciate it.
"It is often darkest just before it turns completely black."
ID: 1653697 · Report as offensive
Claggy
Volunteer tester

Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1653704 - Posted: 16 Mar 2015, 23:02:13 UTC - in response to Message 1653687.  

Thanks for the update, and all the efforts,

Any progress on getting Seti Beta's issues fixed?

Claggy
ID: 1653704 · Report as offensive
Profile betreger Project Donor
Joined: 29 Jun 99
Posts: 11408
Credit: 29,581,041
RAC: 66
United States
Message 1653732 - Posted: 17 Mar 2015, 1:47:24 UTC
Last modified: 17 Mar 2015, 1:57:53 UTC

Matt, I do look forward to the repairs, and thanks for your efforts.
One question: what happened to all those channels that were split while AP was down? I would think they would have lots of AP work waiting.
ID: 1653732 · Report as offensive
kittyman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1653784 - Posted: 17 Mar 2015, 7:50:31 UTC

Thank you very much, Matt, for your continued updates on the gremlins coming and going.
Very refreshing to have that information shared with us.

Meow!
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1653784 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1653862 - Posted: 17 Mar 2015, 14:32:03 UTC - in response to Message 1653687.  

I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon.

- Matt

RAID5 used?
ID: 1653862 · Report as offensive
OzzFan · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1653869 - Posted: 17 Mar 2015, 15:00:23 UTC - in response to Message 1653862.  

I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon.

- Matt

RAID5 used?


Don't tell me you belong to BAARF? :-P
ID: 1653869 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1654144 - Posted: 18 Mar 2015, 15:03:23 UTC - in response to Message 1653869.  

I think this was due to a spurious disk error, but informix was in a sad state. I got it kinda back up and running Sunday night, but have been spending all day repairing/checking that index (and the whole database) so we haven't been able to assimilate any results for a while. Once again: soon.

- Matt

RAID5 used?


Don't tell me you belong to BAARF? :-P

LoL, no. But the mention of BAARF led to an answer to my unspoken question: RAID5 (even when it is being used) doesn't check parity on reads, so a "spurious disk error" is quite possible. I had a better impression of the error-correction abilities of such arrays before.
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
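
(A toy illustration of that point, using the textbook RAID5 model: the parity block is the byte-wise XOR of the data blocks in a stripe, an ordinary read just returns the requested block, and only a scrub or rebuild recomputes parity and can notice silent corruption. A conceptual sketch only, not how any particular controller is implemented.)

    from functools import reduce
    from operator import xor

    def parity(blocks):
        """RAID5 parity for one stripe: byte-wise XOR of the data blocks."""
        return bytes(reduce(xor, column) for column in zip(*blocks))

    def normal_read(blocks, index):
        """An ordinary read: return the block, no parity verification."""
        return blocks[index]

    def scrub(blocks, stored_parity):
        """A verify/scrub pass: recompute parity and compare."""
        return parity(blocks) == stored_parity

    stripe = [bytearray(b"\x01\x02"), bytearray(b"\x03\x04"), bytearray(b"\x05\x06")]
    p = parity(stripe)
    stripe[1][0] ^= 0xFF                     # silent corruption on one disk
    corrupted = normal_read(stripe, 1)       # the read still "succeeds", unchecked
    assert scrub(stripe, p) is False         # only a scrub catches the mismatch
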
ID: 1654144 · Report as offensive
Cheopis

Joined: 17 Sep 00
Posts: 156
Credit: 18,451,329
RAC: 0
United States
Message 1655466 - Posted: 21 Mar 2015, 18:10:33 UTC
Last modified: 21 Mar 2015, 18:11:47 UTC

Matt,

I want to thank you and the rest of the team for keeping us in the know about what's going on.

At the same time, I can't remember seeing any response from the team to the questions that have been asked about whether it's time to start looking at new database software.

My first thought on the matter is that whatever Google uses for its search databases must be robust enough to handle SETI data. Google also tends to get involved in some science. Perhaps one of the folks who read these forums might be in a position to float a question to Google representatives and see if there might be a way to get help, or at least advice?

I suspect that Google heavily utilizes solid state storage, but you've been indicating that the problem seems to be software, not hardware.

If Google is using an in-house database (which seems highly likely) then they certainly have some extremely talented database people available who might be able to help even without providing proprietary database software.

So, does anyone here work at Google, or have a solid connection there that can be pinged to see if help is an option? It seems to me that Google is forward-looking enough that they might be happy to help.
ID: 1655466 · Report as offensive
Profile BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1655983 - Posted: 23 Mar 2015, 10:18:14 UTC - in response to Message 1655466.  

UC Berkeley has its own "Database Services" catalog (it lists several DBs):
http://ist.berkeley.edu/services/catalog/database

I think the SETI@home staff already asked them about a better solution (PostgreSQL?) but came back empty-handed (lack of some needed features?)

PostgreSQL seems to have some major users:
http://en.wikipedia.org/wiki/PostgreSQL#Prominent_users

Comparison of Limits:
http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Limits
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1655983 · Report as offensive
Cheopis

Joined: 17 Sep 00
Posts: 156
Credit: 18,451,329
RAC: 0
United States
Message 1656147 - Posted: 23 Mar 2015, 21:52:36 UTC - in response to Message 1655983.  

Aye, BilBg, but I figure the SETI@home team has already examined at least most of the 'easy' solutions and obvious potential problems, as well as approached local experts within the university system.

If anyone's got a database program that can handle SETI@home's databases, it's probably either an insurance company, the NSA, or Google. Of those three, I figure the one most likely to be interested in helping is Google :)
ID: 1656147 · Report as offensive
David S
Volunteer tester
Joined: 4 Oct 99
Posts: 18352
Credit: 27,761,924
RAC: 12
United States
Message 1656197 - Posted: 24 Mar 2015, 1:55:08 UTC

The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford. Considering this project's budget, that basically means two things: it's free, and it doesn't require multiple full time staff to do nothing else but maintain it. If they had Google's budget, it wouldn't be a problem.
David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

ID: 1656197 · Report as offensive
Profile betreger Project Donor
Joined: 29 Jun 99
Posts: 11408
Credit: 29,581,041
RAC: 66
United States
Message 1656201 - Posted: 24 Mar 2015, 2:07:11 UTC - in response to Message 1656197.  

Yes + a lot.
ID: 1656201 · Report as offensive
Cheopis

Joined: 17 Sep 00
Posts: 156
Credit: 18,451,329
RAC: 0
United States
Message 1656249 - Posted: 24 Mar 2015, 4:37:48 UTC - in response to Message 1656197.  
Last modified: 24 Mar 2015, 4:39:05 UTC

The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford. Considering this project's budget, that basically means two things: it's free, and it doesn't require multiple full time staff to do nothing else but maintain it. If they had Google's budget, it wouldn't be a problem.


Aye, but I don't know what Google uses, or how much they would charge to license it if it's in-house code (or whether they would license it at all).

It's conceivable that it's ridiculously well-documented and would be moderately easy to administer. I know, I know, the very idea of well-documented, easy-to-use code in a large corporation seems alien, but it is possible. Google searches just seem to be far too fast to be based on spaghetti code. (Go ahead, laugh at me now.)

I still think it's worth thinking about. If anyone has a nearly off-the-shelf solution that can handle the complexity of the SETI@Home database, it's Google.

*shrug* It's an idea. Not like I can command anyone to do anything here. SETI@Home has a little clout in the intellectual/computing world simply based on its history. Google might be happy to help, in order to associate itself with the project in a meaningful way.
ID: 1656249 · Report as offensive
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1656273 - Posted: 24 Mar 2015, 5:51:41 UTC - in response to Message 1656249.  

Companies like Google also have one thing on their side ... if they screw up a search, it's not really a big deal. The next time their web crawlers update their database, the search will be OK again.

The Seti team has, hmmm, 15 years of data that can't be lost. No one will convince me that many swear words were NOT uttered (most likely screamed) when things went wrong with the database, or that the team is not still sweating bullets over whether their data can be reliably recovered.

(Now this is an assumption on my part) ALL because they have a small budget and have to move files around because their server storage isn't big enough.

The Seti yearly budget would probably not even be comparable to Google's hourly power budget for a single data center.

Rack space 'rental' in a data center is not cheap either; again an assumption, but Seti probably resides in about 3 racks, maybe 5 with support computers.
ID: 1656273 · Report as offensive
Profile BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1656308 - Posted: 24 Mar 2015, 7:50:43 UTC - in response to Message 1656197.  

The problem is not merely in finding a database that can handle S@H, it's also in finding one they can afford.

Yes, and they can afford PostgreSQL as it is free
http://www.postgresql.org/
http://www.postgresql.org/download/
http://www.enterprisedb.com/products-services-training/pgdownload

Out of curiosity I downloaded the Windows 32 bit installer and it is only 56 MB

According to Wikipedia it is on par with Oracle in speed:
http://en.wikipedia.org/wiki/PostgreSQL#Benchmarks_and_performance

E.g. "In April 2012, Robert Haas of EnterpriseDB demonstrated PostgreSQL 9.2's linear CPU scalability using a server with 64 cores"

I remember some post saying that Informix works slowly while at the same time not loading the CPU or HDD ...
Found it:
"Informix never ceases to astonish me with the way it does things. The table rebuild is neither maxing out CPUs or I/O, primarily because it doesn't seem to be running the table creation in parallel. It's working on one table fragment at a time"
http://setiathome.berkeley.edu/forum_thread.php?id=76106&postid=1600681#1600681

To me, PostgreSQL seems/looks better than MySQL (and maybe better than the old version of Informix they use?)

If they have an "effectively infinite amount of disk space (38TB usable)" and a "copy of the whole database as it lives on disk (about 13TB)",
and some time (of course), they could do some 'playing' with PostgreSQL (e.g. examine whether there are tools/add-ons/interfaces to convert from Informix to PostgreSQL).
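
(If someone did want to 'play' along those lines, one low-tech route is to stream rows out of Informix through a DB-API driver and bulk-load them into PostgreSQL with COPY. A rough sketch, assuming an Informix DB-API connection (e.g. via the IfxPy module) and psycopg2 on the PostgreSQL side; the table name is a placeholder, and real schemas and types would have to be mapped first.)

    import csv
    import io
    import psycopg2   # PostgreSQL driver; an Informix DB-API driver is assumed for ifx_conn

    def copy_table(ifx_conn, pg_conn, table, batch=10000):
        """Stream rows from an Informix table into an identically named,
        already created PostgreSQL table using COPY ... FROM STDIN."""
        src = ifx_conn.cursor()
        src.execute(f"SELECT * FROM {table}")
        dst = pg_conn.cursor()
        while True:
            rows = src.fetchmany(batch)
            if not rows:
                break
            buf = io.StringIO()
            csv.writer(buf).writerows(rows)   # serialize the batch as CSV in memory
            buf.seek(0)
            dst.copy_expert(f"COPY {table} FROM STDIN WITH (FORMAT csv)", buf)
        pg_conn.commit()

    # e.g. copy_table(ifx, pg, "result")   # connections and table name are hypothetical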
 
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1656308 · Report as offensive
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1656521 - Posted: 24 Mar 2015, 23:53:17 UTC - in response to Message 1656308.  

My first quick look at PostgreSQL suggests that 400 MB is considered a big DB, which should be plenty for now, but is that for 1 DB or for the 20 helper DBs required?

The last specs I saw for Informix listed petabytes for its size limits.

And I have no clue which one would perform better under the loads they put on it.
ID: 1656521 · Report as offensive
Profile BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1656532 - Posted: 25 Mar 2015, 0:29:15 UTC - in response to Message 1656521.  

My first quick look at PostgreSQL is 400 MB is considered a big dB ...

Where did you see that??
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1656532 · Report as offensive
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1656627 - Posted: 25 Mar 2015, 8:49:10 UTC - in response to Message 1656532.  

My first quick look at PostgreSQL is 400 MB is considered a big dB ...

Where did you see that??



I would have to go hunting for it. It was on an app download page.

It was along the lines of "We are successfully running a 400 MB DB with no problem." I'm not sure what their limit is; I didn't look.

But when an app brags/advertises about 400 MB, it's probably not far off from its actual limit.

As I said, it was just a quick look at what they offer.
ID: 1656627 · Report as offensive