Largest single system to SCALE SETI?

ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 20334
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1085903 - Posted: 11 Mar 2011, 1:49:13 UTC - in response to Message 1085882.  
Last modified: 11 Mar 2011, 1:49:54 UTC

Where is the photo of the actual machine?


It's just a boring-looking box, like most servers...

See:

Wikipedia: HP Superdome

The boxes look to be a little broader than Steve Ballmer! And is Bry B in amongst some of the pictures?...

Also, this little test has already prompted Dr A to update the BOINC system for Windows hosts to cope with such a behemoth.


Aside: (For some geeky enthusiasm!)

Rather interestingly, it has also uncovered that (most likely) the Windows scheduler can only cope with a maximum of 64 logical processors in one gulp, and so uses multiple schedulers to handle groups of up to 64 logical processors at a time in isolation... A very different strategy to that of Linux, where the old O(1) scheduler and, more recently, the impressive O(log n) Completely Fair Scheduler accept up to 1024 logical processors in one gulp! And I wouldn't be surprised if there are patches to handle even more...
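If you're curious to see that for yourself, here's a minimal sketch (C++ against the Win32 API, and assuming Windows 7 / Server 2008 R2 or later, where the processor-group calls were introduced) that lists the up-to-64-processor groups on a machine:

    // Enumerate Windows processor groups. Each group holds at most 64
    // logical processors because the per-group affinity mask (KAFFINITY)
    // is a 64-bit bitmask.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        WORD groups = GetActiveProcessorGroupCount();
        std::printf("Processor groups: %u\n", groups);

        for (WORD g = 0; g < groups; ++g) {
            // At most 64 logical processors per group.
            std::printf("  group %u: %lu logical processors\n",
                        g, GetActiveProcessorCount(g));
        }
        std::printf("Total: %lu\n",
                    GetActiveProcessorCount(ALL_PROCESSOR_GROUPS));
        return 0;
    }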


Happy fast parallel crunchin'!
Martin


In this context:

O(1) means that, regardless of the number of tasks and CPUs, the scheduler always takes at most a fixed time to schedule the processes amongst the processors. That is, increasing the number of tasks doesn't degrade the scheduler;

O(log n) describes a scheduler where choosing the next task to run can be done in constant time, but reinserting a task after it has run requires O(log n) operations. Again, this in effect means that increasing the number of tasks doesn't degrade the scheduler (unless you get up to unrealistic numbers).
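As a toy illustration (my own sketch, not the kernel's code): CFS keeps runnable tasks in a red-black tree ordered by virtual runtime, and a C++ std::multimap is typically backed by a red-black tree too, so it shows the same shape - O(log n) to reinsert a task, constant time to pick the next one:

    // Toy CFS-style runqueue: tasks ordered by virtual runtime (the key).
    #include <cstdio>
    #include <map>

    struct Task { int id; };

    int main()
    {
        std::multimap<long long, Task> runqueue;

        // Enqueue tasks: O(log n) each.
        runqueue.insert({3000, {1}});
        runqueue.insert({1000, {2}});
        runqueue.insert({2000, {3}});

        while (!runqueue.empty()) {
            // Pick the task with the least vruntime: constant time.
            auto next = runqueue.begin();
            std::printf("run task %d (vruntime %lld)\n",
                        next->second.id, next->first);

            // After it runs, its vruntime has grown; reinsert: O(log n).
            long long grown = next->first + 1500;
            Task t = next->second;
            runqueue.erase(next);
            if (grown < 5000)   // let the toy loop terminate
                runqueue.insert({grown, t});
        }
        return 0;
    }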
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 1085965 - Posted: 11 Mar 2011, 3:47:12 UTC - in response to Message 1085903.  

Nice catch, Martin!

I did like the part from 2005: "HP Showcasing Itanium-based Superdome Server Running HP-UX, Windows and Linux Concurrently" (The Hewlett-Packard Company, 2005-01-17).

This showed they were smart enough to know which processor/OS combination would meet the customer's needs.

It would still be nice to wind it up for NITPCKR, as part of the query has to reach into the database to find adjacent pixels and/or repeats for the same pixel. The concurrent sharing between cells could allow the cached information to be reprocessed rather than re-retrieved. The total RAM in the machine would hold just about the entire set of returned results. It would not matter that it took four-plus days to sort the retrieved data out to the various processors/cells; once it was all loaded, just say GO!
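A purely hypothetical sketch of that caching idea (the pixel/signal layout here is invented for illustration, not the real NITPCKR schema): hold the returned results in RAM keyed by sky pixel, so a query for adjacent pixels, or a repeat on the same pixel, hits the cache instead of re-retrieving from the database:

    // In-RAM cache of returned results, keyed by sky pixel.
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    struct Signal { double power; double time; };   // invented, for illustration
    using PixelId = long long;

    // Loaded once up front; afterwards no lookup touches the database.
    std::unordered_map<PixelId, std::vector<Signal>> cache;

    // Gather the signals for a pixel and its neighbours, all from RAM.
    std::vector<const Signal*> signalsNear(const std::vector<PixelId>& pixels)
    {
        std::vector<const Signal*> out;
        for (PixelId q : pixels) {
            auto it = cache.find(q);
            if (it == cache.end()) continue;        // nothing returned for q
            for (const Signal& s : it->second) out.push_back(&s);
        }
        return out;
    }

    int main()
    {
        cache[42] = {{10.5, 1.0}, {12.1, 2.0}};
        cache[43] = {{ 9.8, 1.5}};

        auto hits = signalsNear({42, 43, 44});      // 44 is a cache miss
        std::printf("%zu signals found near pixel 42\n", hits.size());
        return 0;
    }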

Pappa


Please consider a Donation to the Seti Project.

Todd Hebert
Volunteer tester
Joined: 16 Jun 00
Posts: 648
Credit: 228,292,957
RAC: 0
United States
Message 1086067 - Posted: 11 Mar 2011, 14:25:05 UTC - in response to Message 1085642.  
Last modified: 11 Mar 2011, 14:25:28 UTC

Put it this way - if SETI@home had one of these in its closet, it could easily handle all of the needs of this project for the next 5-7 years, even assuming the user base grows by 10% per year (which compounds to roughly double over seven years). And barring system upgrades that would require a restart, it would be fully available 24/7.

This is not your average system by any means! I had the pleasure of working on one of the original SuperDomes and was totally in awe. And I am a hardware guy who has also worked on Cray systems, so I'm not easily impressed.

Todd
World RAC Leader

I look at this and wonder... IF arrangements could be made to get the correct Fedora Core installed, the SETI staff could fly in with a duplicate of the Master Science Database (with NITPCKR). In about a month, analysis of ALL the returned results could be almost complete... along with some analysis of the best reobservation candidates. Heck, it might even find ET....
That would be one Hell of a Hurrah!!!!
Pappa


Why should one single computer be so much better?
It's 'just' 64 Intel Itanium processors with 256 cores.
The Top 25 computers have 258 cores (just the CPUs).

Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1086081 - Posted: 11 Mar 2011, 15:32:56 UTC

Given that SETI has recently purchased (at a discount, but still purchased) two ProLiant DL180 G6 servers (Oscar and Carolyn) - not in the Superdome class, but non-trivial servers nonetheless - and given that in Bry B we have someone with, clearly, good contacts in the server division of HP, I wonder if there are any questions and answers either side could profitably ask of the other.....?

Just dropping that thought into the conversation....
ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 20334
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1086156 - Posted: 11 Mar 2011, 19:55:08 UTC - in response to Message 1086081.  
Last modified: 11 Mar 2011, 20:01:54 UTC

... Just dropping that thought into the conversation....


That could make for some impressive testing and some astronomical publicity for all concerned... All on a real-world BIG-problem task AND furthering various threads of SETI/computer/database science... ;-)

Interesting idea...

Especially if it could validate the claim of gulping the master science database entirely into RAM, and if the interconnect then survives the database load of searching through that lot...

All that would be needed next would be some local storage fast enough to keep the database busy regardless of the database-log bottleneck. (I hope Berkeley have looked at how well their database is, or can be, parallelised...)


Keep searchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1086174 - Posted: 11 Mar 2011, 21:30:26 UTC - in response to Message 1086081.  

Given that SETI has recently purchased (at a discount, but still purchased) two ProLiant DL180 G6 servers (Oscar and Carolyn) - not in the Superdome class, but non-trivial servers nonetheless - and given that in Bry B we have someone with, clearly, good contacts in the server division of HP, I wonder if there are any questions and answers either side could profitably ask of the other.....?

Just dropping that thought into the conversation....


I'm sure they have contacts at HP; anyone who purchases the type of servers they are buying does. However, having a first-hand conversation with someone who tests servers for HP could be beneficial, since you are stepping beyond the "script" of support.
Traveling through space at ~67,000mph!