Panic Mode On (98) Server Problems?


Donald L. Johnson
Joined: 5 Aug 02
Posts: 8240
Credit: 14,654,533
RAC: 20
United States
Message 1694070 - Posted: 20 Jun 2015, 20:25:10 UTC - in response to Message 1694009.  

...Anyway, back to science: Several years ago there was someone in the Lab who was, as I understood it, using some of our data to search for pulsars, quasars, black holes and/or some other anomalies in space. It may have been a post-doc, but it was project-germane. Sadly, the exact specifics escape me at the moment, but that was the true intended gist of my question.

I believe you are referring to the Hydrogen survey. As I recall, it is an attempt to discover some of the missing matter.
I could be wrong; I arose late and still need more coffee.

No, finding pulsars and the like is a secondary result of the Astropulse search.

Eric's Hydrogen Survey is a totally separate project and, as I understand it, not related to SETI@home.
Donald
Infernal Optimist / Submariner, retired
ID: 1694070
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1694074 - Posted: 20 Jun 2015, 20:40:15 UTC - in response to Message 1694068.  

At the risk of repeating myself, I'll just state my position again.

I think the project and its staff are doing a rather fine job with the limited resources they have available.
If the project were operating out of deep pockets, I have no doubt that things would be better and advancing more quickly, as some would like to suppose that they should be doing now.

But.............
When Matt says that a proper nitpicker server would cost some 30K to put together, and it is obvious that the project does not have it to spend, what would some have them do??
A fundraiser may be hatched soon to perhaps get a start on that goal, but it would take a lot of very generous hands on deck to get that one done.

So, please try to accept the fact that the project is pretty much doing what they can with what they have available to them.
And again, I think it has been a grand job so far.

Meow!

The recently established FIRSST (the Foundation for Investing in Research on SETI Science and Technology) could help solve some of the funding issues, if they can get an endowment.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1694074
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1694077 - Posted: 20 Jun 2015, 20:42:35 UTC - in response to Message 1694074.  
Last modified: 20 Jun 2015, 20:47:35 UTC

At the risk of repeating myself, I'll just state my position again.

I think the project and its staff are doing a rather fine job with the limited resources they have available.
If the project were operating out of deep pockets, I have no doubt that things would be better and advancing more quickly, as some would like to suppose that they should be doing now.

But.............
When Matt says that a proper nitpicker server would cost some 30K to put together, and it is obvious that the project does not have it to spend, what would some have them do??
A fundraiser may be hatched soon to perhaps get a start on that goal, but it would take a lot of very generous hands on deck to get that one done.

So, please try to accept the fact that the project is pretty much doing what they can with what they have available to them.
And again, I think it has been a grand job so far.

Meow!

The recently established FIRSST (the Foundation for Investing in Research on SETI Science and Technology) could help solve some of the funding issues, if they can get an endowment.

That is the first I have heard of it...no pun intended.
If you have any more information or links, they would be of interest.
Possible to start some kind of petition drive to ask them for assistance for the project?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1694077
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13722
Credit: 208,696,464
RAC: 304
Australia
Message 1694079 - Posted: 20 Jun 2015, 20:54:20 UTC - in response to Message 1693941.  

More data only feeds into a vicious circle, which may have social science values, perhaps?

Pavlov did that study years ago.
Grant
Darwin NT
ID: 1694079
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1694081 - Posted: 20 Jun 2015, 20:55:27 UTC - in response to Message 1694077.  

At the risk of repeating myself, I'll just state my position again.

I think the project and its staff are doing a rather fine job with the limited resources they have available.
If the project were operating out of deep pockets, I have no doubt that things would be better and advancing more quickly, as some would like to suppose that they should be doing now.

But.............
When Matt says that a proper nitpicker server would cost some 30K to put together, and it is obvious that the project does not have it to spend, what would some have them do??
A fundraiser may be hatched soon to perhaps get a start on that goal, but it would take a lot of very generous hands on deck to get that one done.

So, please try to accept the fact that the project is pretty much doing what they can with what they have available to them.
And again, I think it has been a grand job so far.

Meow!

The recently established FIRSST (the Foundation for Investing in Research on SETI Science and Technology) could help solve some of the funding issues, if they can get an endowment.

That is the first I have heard of it...no pun intended.
If you have any more information or links, they would be of interest.
Possible to start some kind of petition drive to ask them for assistance for the project?


When I said recently established, I meant it was mentioned just this month. There isn't much about it yet, but this is the information I currently have:

https://seti.berkeley.edu/video/firsst
https://www.youtube.com/watch?v=jY5cQTKlYSs
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1694081
Rasputin42
Volunteer tester
Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1694083 - Posted: 20 Jun 2015, 21:01:07 UTC

Assuming nitpicker is set up and running: are we crunchers going to support it in any way with our computers, or would it have to do the work alone?
ID: 1694083
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1694084 - Posted: 20 Jun 2015, 21:03:05 UTC - in response to Message 1694083.  

Assuming nitpicker is set up and running: are we crunchers going to support it in any way with our computers, or would it have to do the work alone?

My understanding is that nitpicker itself would be a stand-alone server and process, but we would of course be supporting it by supplying the data that we process and send back to the project every day.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1694084
Rasputin42
Volunteer tester
Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1694085 - Posted: 20 Jun 2015, 21:06:05 UTC

How powerful does it need to be? If it takes 300,000 computers running for years to provide the data, how can one single computer possibly handle it all?
ID: 1694085
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1694090 - Posted: 20 Jun 2015, 21:17:46 UTC - in response to Message 1694085.  

How powerful does it need to be? If it takes 300,000 computers running for years to provide the data, how can one single computer possibly handle it all?

Our computers do the most compute-intensive part of the data reduction.
Further analysis will require different "skills". The most important part is excellent I/O capability, not raw computing power. That's why an enormous amount of RAM is required, plus the best buses available along the whole route from the CPUs to the data storage subsystem.
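
A back-of-the-envelope sketch of that point (Python; the 10 TB figure echoes the "10+ TB" science DB mentioned later in this thread, and the throughput numbers are assumptions for illustration only, not measurements from the project's hardware):

# Illustrative only: why the nitpicker box is I/O-bound rather than compute-bound.
def scan_hours(db_terabytes, throughput_mb_per_s):
    """Hours needed to read the whole science DB once at a sustained throughput."""
    db_megabytes = db_terabytes * 1024 * 1024
    return db_megabytes / throughput_mb_per_s / 3600

DB_TB = 10.0  # assumed size, per the "10+ TB" figure quoted elsewhere in the thread

for label, mb_s in [("single SATA disk (~150 MB/s)", 150),
                    ("small RAID array (~600 MB/s)", 600),
                    ("working set cached in RAM (~20 GB/s)", 20_000)]:
    print(f"{label}: ~{scan_hours(DB_TB, mb_s):.1f} hours per full pass")

Whatever the exact numbers, the time per pass is set by the storage path, not by CPU speed, which is why the RAM and the buses matter most.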
ID: 1694090
Rasputin42
Volunteer tester
Joined: 25 Jul 08
Posts: 412
Credit: 5,834,661
RAC: 0
United States
Message 1694091 - Posted: 20 Jun 2015, 21:21:40 UTC

Thanks for the info, Raistmer!
ID: 1694091
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1694125 - Posted: 21 Jun 2015, 0:04:28 UTC - in response to Message 1694085.  

How powerful does it need to be? If it takes 300,000 computers running for years to provide the data, how can one single computer possibly handle it all?

Think of it more like census data. You have a large fleet of people doing the legwork, going door-to-door, asking simple questions to get the data, and writing it all down in spreadsheets (that is us crunching the WUs, searching for any information to be had).

Then the many, many thousands of spreadsheets from all the people who did the legwork get entered into a giant database (the science DB).

Then the data gets analyzed so that we can get those pretty pie charts and line/bar graphs that get published in newspapers and so forth (ntpckr).


The hard part is all the legwork to gather the data in the first place. Realistically, one machine can analyze the data, albeit a fairly robust machine. The big problem with DB queries is that they consume an absolutely massive amount of memory. 512GB of RAM probably still isn't enough for it to be as efficient as it could be.

Now start looking around on websites that sell computer/server components. How many boards are there that can take >512GB of RAM, and what do they cost? Even if you can only find boards that go up to 512GB, look at how many slots there are, then look at the price of registered ECC memory modules large enough to max the board out at 512GB (a board that supports that much memory is going to have multiple CPU sockets, so registered memory is required), and you'll start to understand why $30k is probably an underestimate.

And I'm not entirely sure this is something that can easily be coded for a cluster of servers. So you start to realize why this is an absolutely daunting task. And it can't be divided up and sent out to us crunchers, because this whole thing relies heavily on access to the entire 10+TB DB on-demand, so it has to be in-house solely because of that, but also because outside access can introduce contamination into the analysis results.
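
To put some very rough numbers on the memory side (the slot count and per-module price below are assumptions for illustration only, not quotes from any vendor):

# Illustrative sizing for 512 GB of registered ECC memory on an assumed 16-slot, dual-socket board.
TARGET_GB  = 512
DIMM_SLOTS = 16                       # assumption: typical dual-socket board
MODULE_GB  = TARGET_GB // DIMM_SLOTS  # -> 32 GB registered ECC modules
PRICE_EACH = 350                      # assumed USD street price per 32 GB RDIMM

print(f"{DIMM_SLOTS} x {MODULE_GB} GB RDIMM: roughly ${DIMM_SLOTS * PRICE_EACH:,} for the memory alone")

And that is the DIMMs by themselves; the multi-socket board, CPUs, chassis, and enterprise storage all come on top of that.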



Regarding "what are they doing?" They're doing the best they can with what they've got. As was said, the budget for the project has a total of maybe enough to pay ONE person, and there are.. what, four? people working on this thing nearly every day. Aside from the fact that they are underpaid and mostly operate as volunteers when they could be doing something else that actually pays them money for their time and skills, they spend most of their time trying to fix problems, rather than pushing forward on development for new things: there's no sense in developing new gadgets to attach to something that doesn't work.

So that's what they're doing. They're trying to get the back-end stuff to be stable and reliable, so that new stuff can come along and actually be functional. The DB has been known to have problems for years, and they've been poking at it, trying different things, and have undoubtedly been in communication with the developers of the DB software to ask whether they have any suggestions or advice. The problem is the DB is so large that even the software developers are scratching their heads and shrugging, because they are in uncharted territory.

To take a line from James May when he was going for the new record in the Veyron Super Sport.. "I asked the engineer 'we know how long the tires last at 253mph: 19 miles, but how long will they last at 260?' and he said 'ask a Navy Admiral how deep his submarines can really go, and he'll shrug and say 'we don't know, until we try it.''"

In theory, all the whitepapers for the software say it should be just fine, because they are still within the absolute maximum limits of the software itself, but the efficiency seems to fall off on a non-linear (though not exponential/logarithmic) curve the larger it gets. No other organization has a DB that large on the software that is being used. There ARE much larger DBs in the world... but those are all on custom, in-house software, because there was a nearly limitless budget to develop said software.

So... tl;dr: they're working with the cards they were dealt. Sure, everyone would love to have been dealt a royal flush, but sometimes, you only get dealt a pair of deuces.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1694125
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1694135 - Posted: 21 Jun 2015, 1:26:52 UTC - in response to Message 1694125.  

How powerful does it need to be? If it takes 300,000 computers running for years to provide the data, how can one single computer possibly handle it all?

Think of it more like census data. You have a large fleet of people doing the legwork, going door-to-door, asking simple questions to get the data, and writing it all down in spreadsheets (that is us crunching the WUs, searching for any information to be had).

Then the many, many thousands of spreadsheets from all the people who did the legwork get entered into a giant database (the science DB).

Then the data gets analyzed so that we can get those pretty pie charts and line/bar graphs that get published in newspapers and so forth (ntpckr).

The hard part is all the legwork to gather the data in the first place. Realistically, one machine can analyze the data, albeit a fairly robust machine. The big problem with DB queries is that they consume an absolutely massive amount of memory. 512GB of RAM probably still isn't enough for it to be as efficient as it could be.

Now start looking around on websites that sell computer/server components. How many boards are there that can take >512GB of RAM, and what do they cost? Even if you can only find boards that go up to 512GB, look at how many slots there are, then look at the price of registered ECC memory modules large enough to max the board out at 512GB (a board that supports that much memory is going to have multiple CPU sockets, so registered memory is required), and you'll start to understand why $30k is probably an underestimate.

And I'm not entirely sure this is something that can easily be coded for a cluster of servers. So you start to realize why this is an absolutely daunting task. And it can't be divided up and sent out to us crunchers, because this whole thing relies heavily on access to the entire 10+TB DB on-demand, so it has to be in-house solely because of that, but also because outside access can introduce contamination into the analysis results.

Regarding "what are they doing?" They're doing the best they can with what they've got. As was said, the budget for the project has a total of maybe enough to pay ONE person, and there are.. what, four? people working on this thing nearly every day. Aside from the fact that they are underpaid and mostly operate as volunteers when they could be doing something else that actually pays them money for their time and skills, they spend most of their time trying to fix problems, rather than pushing forward on development for new things: there's no sense in developing new gadgets to attach to something that doesn't work.

So that's what they're doing. They're trying to get the back-end stuff to be stable and reliable, so that new stuff can come along and actually be functional. The DB has been known to have problems for years, and they've been poking at it, trying different things, and have undoubtedly been in communication with the developers of the DB software to ask whether they have any suggestions or advice. The problem is the DB is so large that even the software developers are scratching their heads and shrugging, because they are in uncharted territory.

To take a line from James May when he was going for the new record in the Veyron Super Sport.. "I asked the engineer 'we know how long the tires last at 253mph: 19 miles, but how long will they last at 260?' and he said 'ask a Navy Admiral how deep his submarines can really go, and he'll shrug and say 'we don't know, until we try it.''"

In theory, all the whitepapers for the software say it should be just fine, because they are still within the absolute maximum limits of the software itself, but the efficiency seems to fall off on a non-linear (though not exponential/logarithmic) curve the larger it gets. No other organization has a DB that large on the software that is being used. There ARE much larger DBs in the world... but those are all on custom, in-house software, because there was a nearly limitless budget to develop said software.

So... tl;dr: they're working with the cards they were dealt. Sure, everyone would love to have been dealt a royal flush, but sometimes, you only get dealt a pair of deuces.


When I price out a system like he specifies for the db server, I get anywhere from $50k to $130k. Besides the cost of the memory, the enterprise SSDs are the largest single item. The base server without memory or storage would be only $10-15k.

I imagine they would also need to update their systems to 10Gb interconnects and include a 10Gb switch to make use of it. Not to mention the new db software, which can easily cost as much as the hardware; some companies sell their licenses per CPU socket and some per processor core.
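
For a sense of how such a build-out could total up, here is a purely hypothetical tally (every figure below is an assumption picked for illustration, not a quote from the project or any vendor), just to show how easily the hardware plus licensing lands in that $50k-$130k range:

# Hypothetical nitpicker db-server build-out; all prices are assumptions for illustration.
parts = {
    "base dual-socket server (no RAM/storage)": 12_000,
    "512 GB registered ECC RAM":                 6_000,
    "enterprise SSD array":                     25_000,
    "10Gb NICs + 10Gb switch":                   4_000,
}
license_per_socket = 15_000   # assumed per-CPU-socket database licensing
sockets = 2

total = sum(parts.values()) + license_per_socket * sockets
for item, cost in parts.items():
    print(f"{item:<45} ${cost:>8,}")
print(f"{'db license (2 sockets)':<45} ${license_per_socket * sockets:>8,}")
print(f"{'total':<45} ${total:>8,}")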
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1694135
qbit
Volunteer tester
Joined: 19 Sep 04
Posts: 630
Credit: 6,868,528
RAC: 0
Austria
Message 1694205 - Posted: 21 Jun 2015, 7:00:26 UTC

It really annoys me that so much money is wasted on bulls**t every single day almost everywhere in the world, but a project like this, which, if successful, could change EVERYTHING, can't get a damn 30 grand for new hardware :-(
ID: 1694205
Phil Burden
Joined: 26 Oct 00
Posts: 264
Credit: 22,303,899
RAC: 0
United Kingdom
Message 1694267 - Posted: 21 Jun 2015, 11:39:27 UTC - in response to Message 1694205.  

It really annoys me that so much money is wasted on bulls**t every single day almost everywhere in the world, but a project like this, which, if successful, could change EVERYTHING, can't get a damn 30 grand for new hardware :-(



Life is full of injustices. Currently on UK satellite TV we have many begging ads (and I mean every ad break) from various charities, all attempting to garner funds for their respective causes. Soon (August) the football season starts, and we get prima donnas, some getting £250,000 per WEEK for 90 minutes of football. It doesn't make sense.

P.
ID: 1694267
Mark Stevenson
Volunteer tester
Joined: 8 Sep 11
Posts: 1736
Credit: 174,899,165
RAC: 91
United Kingdom
Message 1694269 - Posted: 21 Jun 2015, 11:50:03 UTC - in response to Message 1694267.  

Life is full of injustices. Currently on UK satellite TV we have many begging ads (and I mean every ad break) from various charities, all attempting to garner funds for their respective causes. Soon (August) the football season starts, and we get prima donnas, some getting £250,000 per WEEK for 90 minutes of football. It doesn't make sense.


It's not just Sky; it's on Freeview and Freesat as well. It's the "materialistic" world we live in: people want everything here and yesterday and don't give a crap about anything but themselves. They don't look to the future or "see" the bigger picture. I'm sure it's the same in almost any "developed" country.
ID: 1694269
JaundicedEye
Joined: 14 Mar 12
Posts: 5375
Credit: 30,870,693
RAC: 1
United States
Message 1694304 - Posted: 21 Jun 2015, 14:04:07 UTC

Every third ad in the US is for a personal injury lawyer.
Wanting to sue someone or something because you got a splinter in your backside from a park bench has become a national pastime here.

I'd much rather donate to SETI than to a lawyer's bank account.

"Sour Grapes make a bitter Whine." <(0)>
ID: 1694304
BANZAI56
Volunteer tester
Joined: 17 May 00
Posts: 139
Credit: 47,299,948
RAC: 2
United States
Message 1694334 - Posted: 21 Jun 2015, 14:43:45 UTC - in response to Message 1694055.  

I think the latest video, posted about an hour ago, may have some of the answers you seek.
Q&A: How Often Does SETI@home Discover a Promising Candidate?
https://youtu.be/C3vdApW2XLQ


Thanks for this link.

Sounds like 2004 was the last time they "turned the crank" on the database.


Leaves one with more questions than answers really...
ID: 1694334
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1694389 - Posted: 21 Jun 2015, 18:33:55 UTC - in response to Message 1694334.  

I think the latest video, posted about an hour ago, may have some of the answers you seek.
Q&A: How Often Does SETI@home Discover a Promising Candidate?
https://youtu.be/C3vdApW2XLQ


Thanks for this link.

Sounds like 2004 was the last time they "turned the crank" on the database.


Leaves one with more questions than answers really...

I'm not sure if "after 5 years we turned the crank" was referring to classic SETI@home or if he meant since switching to BOINC. Under classic SETI@home, when they had their own dish time, there was a reobservation of interesting or notable signals, which was planned under the original run and took place in 2003, IIRC.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1694389
Filipe
Joined: 12 Aug 00
Posts: 218
Credit: 21,281,677
RAC: 20
Portugal
Message 1694554 - Posted: 22 Jun 2015, 10:35:15 UTC

Any news on the Astropulse database?

Matt had talked about a possible comeback in mid-June.
ID: 1694554
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1694568 - Posted: 22 Jun 2015, 12:45:19 UTC - in response to Message 1694554.  

Any news on the Astropulse database?

Matt had talked about a possible comeback in mid-June.

The AP database has been online for months. You can check the server status here. What is happening in the background is the clean-up of the data; then the new data we have processed since it was restarted will be merged with the previous data.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1694568