Message boards :
Number crunching :
A Modest Proposal for Easy FREE Bandwidth Relief
bill · Joined: 16 Jun 99 · Posts: 861 · Credit: 29,352,955 · RAC: 0
"because we are currently evaluating the pros and cons of moving our server farm to a colocation facility on campus. We haven't decided one way or another yet, as we still have to determine costs and feasibility of moving our Hurricane Electric connection down on campus (where the facility is located). If we do end up making the leap, we immediately gain (a) better air conditioning without worry, (b) full UPS without worry, and (c) much better remote kvm access without worry (our current situation is wonky at best). Maybe we'll also get more bandwidth (that's a big maybe). Plus they have staff on hand to kick machines if necessary. This would vastly free up time and mental bandwidth so Jeff, Eric, and I can work on other things, like science! The con of course is the inconvenience if we do have to be hands-on with a broken server. Anyway, exciting times! This wouldn't be possible, of course, without many recent server upgrades that vastly reduced our physical footprint (or rackprint), thus bringing rack space rental at the colo within a reasonable limit." Posted by Matt, Jan 30, 2013
tbret · Joined: 28 May 99 · Posts: 3380 · Credit: 296,162,071 · RAC: 40
"If on the other hand the current setup can handle the data as fast or faster than it is generated at Arecibo, even with some of the day-to-day problems we are having, what is the point? If the drives with data are not stacking up higher and higher on the shelf, then there is no real incentive or strong reason to do things differently."

My understanding is that there is a huge amount of data from Green Bank observations that we have yet to see. So even if we were chewing through data faster than it is collected at Arecibo, I think there is still a lot of motivation to expand the system's crunching capacity.
.clair. · Joined: 4 Nov 04 · Posts: 1300 · Credit: 55,390,408 · RAC: 69
One thing is certain: every year, as our crunchers are upgraded, we get closer to crunching faster than we collect data. I think Matt (or someone in the lab) did comment on this some time ago.

It was not long ago that a router in the lab was upgraded because it was only capable of running at 66 megabit (or something like that), and we were stuck with it as a bottleneck in our crunching craziness. How little time has it taken us to hit the bandwidth wall again? Many of us, definitely myself, can now crunch work faster than we can get new work.

Green Bank data gives the project a boost to stay ahead of us on collection of work. That will not last. In a few years, with all the new GPU and CPU kit that we will buy and burn our credit cards on, we will catch up again. Doubling the download bandwidth would give us a short respite in getting work to crunch, and not long after that it will have to double again just to keep up with the SETI@home supercomputer that we are building. In years to come, even a gigabit link will not keep us fed with work.

Looking at DC projects around the planet, which is the biggest of them all? I cannot find a list anywhere that lays out that info on a level playing field. EDIT: http://en.wikipedia.org/wiki/List_of_distributed_computing_projects (I don't know when this was last updated.)

Just had a think. OK, I know that is a dodgy thing for me to do :) What is the S@H supercomputer capable of, in petaflops or some kind of measure (when working at average/peak speed)? Einstein@home has a guesstimate system for working it out.
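The "doubling only buys a short respite" worry above is easy to check with rough arithmetic. A minimal sketch, assuming a hypothetical 100 Mbit/s download link and a hypothetical average workunit size of about 370 KB (both figures are illustrative assumptions, not project numbers):

```python
# Rough estimate of how many workunits per day a download link can serve.
# Both inputs below are illustrative assumptions, not official project figures.
link_mbit_per_s = 100          # hypothetical download link speed
wu_size_kb = 370               # hypothetical average workunit size

link_bytes_per_s = link_mbit_per_s * 1_000_000 / 8   # 100 Mbit/s = 12.5 MB/s
wu_bytes = wu_size_kb * 1_000

wus_per_second = link_bytes_per_s / wu_bytes         # ~33.8 workunits/second
wus_per_day = wus_per_second * 86_400                # ~2.9 million workunits/day

print(f"~{wus_per_second:.1f} WUs/s, ~{wus_per_day / 1e6:.1f} million WUs/day")
```

The throughput scales linearly with link speed, so doubling the link doubles the daily workunit capacity; if the fleet's crunching power keeps growing faster than that, the respite from any single upgrade is indeed short.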
Link · Joined: 18 Sep 03 · Posts: 834 · Credit: 1,807,369 · RAC: 0
"What is the S@H supercomputer capable of, in petaflops or some kind of measure (when working at average/peak speed)?"

According to the project status page, we are now at 620,069 GigaFLOPS.
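As a quick unit conversion of the figure quoted above (620,069 GigaFLOPS; the conversion factors are just standard SI prefixes, 1 PetaFLOPS = 1,000,000 GigaFLOPS):

```python
# Convert the project status page figure into larger units.
gigaflops = 620_069

teraflops = gigaflops / 1_000        # ~620 TeraFLOPS
petaflops = gigaflops / 1_000_000    # ~0.62 PetaFLOPS

print(f"{teraflops:.0f} TFLOPS = {petaflops:.2f} PFLOPS")
# prints "620 TFLOPS = 0.62 PFLOPS"
```

So the answer to the petaflops question, at the time of that status page snapshot, was roughly 0.62 PetaFLOPS.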
.clair. · Joined: 4 Nov 04 · Posts: 1300 · Credit: 55,390,408 · RAC: 69
"According to the project status page, we are now at 620,069 GigaFLOPS."

Thanks for the link, Link :) I seem to have missed that one; bookmarked it now.
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 65709 · Credit: 55,293,173 · RAC: 49
"It's probably a reasonable idea for a temporary workaround."

Well, in that case, shorties could be used as fill on GPUs until other, more suitable types show up; otherwise they could be used on the CPU. I won't object to a more flexible allocation like this, if it were done, of course.

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HSTs
Wales_Family · Joined: 5 Sep 11 · Posts: 9 · Credit: 3,181,278 · RAC: 8
"Has anyone considered the option of splitting and distributing the work units from a location other than Berkeley? That would redirect a good amount of outbound data away from Berkeley's overloaded bandwidth. The completed WUs would only be reported back to Berkeley."

Yes, see http://setiathome.berkeley.edu/forum_thread.php?id=69594&postid=1291854
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.