A Modest Proposal for Easy FREE Bandwidth Relief


bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1336938 - Posted: 11 Feb 2013, 3:47:38 UTC

"because we are currently evaluating the pros and cons of moving our server farm to a colocation facility on campus. We haven't decided one way or another yet, as we still have to determine costs and feasibility of moving our Hurricane Electric connection down on campus (where the facility is located). If we do end up making the leap, we immediately gain (a) better air conditioning without worry, (b) full UPS without worry, and (c) much better remote kvm access without worry (our current situation is wonky at best). Maybe we'll also get more bandwidth (that's a big maybe). Plus they have staff on hand to kick machines if necessary. This would vastly free up time and mental bandwidth so Jeff, Eric, and I can work on other things, like science! The con of course is the inconvenience if we do have to be hands-on with a broken server. Anyway, exciting times! This wouldn't be possible, of course, without many recent server upgrades that vastly reduced our physical footprint (or rackprint), thus bringing rack space rental at the colo within a reasonable limit."

Posted by Matt, Jan 30, 2013
ID: 1336938
tbret
Volunteer tester
Joined: 28 May 99
Posts: 3380
Credit: 296,162,071
RAC: 40
United States
Message 1336980 - Posted: 11 Feb 2013, 6:39:26 UTC - in response to Message 1336888.  

If, on the other hand, the current setup can handle the data as fast as or faster than it is generated at Arecibo, even with some of the day-to-day problems we are having, what is the point? If the drives with data are not stacking up higher and higher on the shelf, then there is no real incentive or strong reason to do things differently.



My understanding is that there is a huge amount of data from Green Bank that we have yet to see. So even if we were chewing through the Arecibo data faster than it is collected, there would still be plenty of motivation to expand the system's crunching capacity.



ID: 1336980
.clair.

Joined: 4 Nov 04
Posts: 1300
Credit: 55,390,408
RAC: 69
United Kingdom
Message 1337047 - Posted: 11 Feb 2013, 14:43:55 UTC
Last modified: 11 Feb 2013, 14:59:00 UTC

One thing is certain: every year, as our crunchers are upgraded, we get closer to crunching faster than the data can be collected. I think Matt (or someone in the lab) commented on this some time ago.
It was not long ago that a router in the lab was upgraded because it was only capable of running at 66 megabit (or something like that), and we were stuck with it as a bottleneck in our crunching craziness.
How little time has it taken us to hit the bandwidth wall again?
Many of us, definitely myself, can now crunch work faster than we can download it.
The Green Bank data gives the project a boost to keep ahead of us on the collection of work, but that will not last. In a few years, with all the new GPU and CPU kit that we will buy and burn our credit cards with, we will soon catch up again.
Doubling the download bandwidth would give us a short respite in getting work to crunch, and not long after that it would have to double again just to keep up with the SETI@home supercomputer that we are building; in years to come even a gigabit link will not keep us fed with work.
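As a rough illustration, here is the back-of-envelope arithmetic in Python. Both constants are my assumptions, not official project figures: the ~366 KB multibeam workunit size and the 100 Mbit/s link speed.

    # How many workunits per day can a given download link sustain?
    # Both constants are assumptions, not official project figures.
    WU_SIZE_BYTES = 366 * 1024      # assumed size of one multibeam workunit
    LINK_MBIT = 100                 # assumed download link capacity

    link_bytes_per_sec = LINK_MBIT * 1_000_000 / 8
    wus_per_sec = link_bytes_per_sec / WU_SIZE_BYTES
    print(f"{wus_per_sec:.1f} WU/s, ~{wus_per_sec * 86_400:,.0f} WU/day")
    # ~33 WU/s, ~2.9 million WU/day.  Doubling LINK_MBIT doubles these
    # figures, but the fleet's crunch rate keeps growing too.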

Looking at DC projects around the planet, which is the biggest of them all?
I cannot find a list anywhere that lays that info out on a level playing field.
EDIT: http://en.wikipedia.org/wiki/List_of_distributed_computing_projects - I don't know when this was last updated.

Just had a think. OK, I know that is a dodgy thing for me to do :)
What is the S@H supercomputer capable of, in petaflops or some other kind of measure, when working at average/peak speed?
Einstein@home has a guesstimate system for working it out.
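For what it's worth, the usual guesstimate falls out of BOINC's credit definition: a host sustaining 1 GFLOPS earns 200 cobblestones per day, so total RAC divided by 200 approximates sustained GFLOPS. A minimal sketch; the RAC figure below is illustrative, not a real project total:

    # Guesstimate project throughput from total recent average credit (RAC).
    # Cobblestone definition: a 1 GFLOPS host earns 200 credits per day.
    CREDITS_PER_DAY_PER_GFLOPS = 200

    def project_gflops(total_rac: float) -> float:
        """Estimate sustained GFLOPS from a project's summed RAC."""
        return total_rac / CREDITS_PER_DAY_PER_GFLOPS

    rac = 124_000_000               # illustrative project-wide RAC
    gf = project_gflops(rac)
    print(f"~{gf:,.0f} GFLOPS = {gf / 1e6:.2f} PFLOPS")
    # -> ~620,000 GFLOPS = 0.62 PFLOPS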
ID: 1337047
Link
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1337110 - Posted: 11 Feb 2013, 17:28:09 UTC - in response to Message 1337047.  

What is the S@H supercomputer capable of, in petaflops or some other kind of measure, when working at average/peak speed?

According to the project status page we are now at 620,069 GigaFLOPS, i.e. roughly 0.62 PetaFLOPS.
ID: 1337110
.clair.

Joined: 4 Nov 04
Posts: 1300
Credit: 55,390,408
RAC: 69
United Kingdom
Message 1337253 - Posted: 11 Feb 2013, 23:51:13 UTC - in response to Message 1337110.  

According to the project status page we are now at 620,069 GigaFLOPS, i.e. roughly 0.62 PetaFLOPS.

Thanks for the link, Link :) I seem to have missed that one; it's bookmarked now.
ID: 1337253
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65709
Credit: 55,293,173
RAC: 49
United States
Message 1337313 - Posted: 12 Feb 2013, 4:09:59 UTC - in response to Message 1334670.  

It's probably a reasonable idea for a temporary workaround.

Note, however, that even before there was an Astropulse application or any GPU crunching, a shortie storm caused problems. That was likely when Thumper and Jocelyn were doing the BOINC database duties; Oscar and Carolyn have better capabilities (but so do the CPUs on users' hosts).

The other negative point is that there would likely be times when GPUs would run dry because all the tapes being split were producing only VLAR or VHAR tasks.
Joe

Well, in that case, shorties could be used as fill on GPUs until other, more suitable types show up; otherwise they could be used on the CPU. I won't object to a more flexible allocation like this, if it were done, of course.
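A minimal sketch of what that flexible allocation might look like. The task fields and the policy here are illustrative, not the real BOINC scheduler:

    # Prefer mid-AR tasks for GPUs; use shorties (VHAR) only as fill,
    # and leave VLARs to the CPUs.  Purely an illustrative policy.
    def pick_gpu_task(queue):
        """Return the next task for a GPU from a list of pending tasks."""
        mid_ar = [t for t in queue if not t["vlar"] and not t["vhar"]]
        if mid_ar:
            return mid_ar[0]        # normal case: a mid-AR task
        shorties = [t for t in queue if t["vhar"]]
        if shorties:
            return shorties[0]      # fill the gap rather than idle the GPU
        return None                 # only VLARs left; CPUs take those

    queue = [{"name": "wu1", "vlar": True, "vhar": False},
             {"name": "wu2", "vlar": False, "vhar": True}]
    print(pick_gpu_task(queue)["name"])   # -> wu2, the shortie fills in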
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1337313
Wales_Family

Joined: 5 Sep 11
Posts: 9
Credit: 3,181,278
RAC: 8
New Zealand
Message 1339011 - Posted: 17 Feb 2013, 0:35:16 UTC - in response to Message 1334744.  
Last modified: 17 Feb 2013, 0:38:09 UTC

Has anyone considered the option of splitting and distributing the work units from a location other than Berkeley? That would redirect a good amount of outbound data away from Berkeley's overloaded link, with the completed WUs only being reported back to Berkeley.


Yes - see http://setiathome.berkeley.edu/forum_thread.php?id=69594&postid=1291854
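For a rough sense of the potential saving, assuming ~366 KB per workunit sent out and ~30 KB per result reported back (both sizes are my assumptions, not measurements):

    # Fraction of Berkeley's traffic a download mirror would absorb.
    WU_DOWN_BYTES = 366 * 1024   # assumed workunit download size
    RESULT_UP_BYTES = 30 * 1024  # assumed result upload size

    offloaded = WU_DOWN_BYTES / (WU_DOWN_BYTES + RESULT_UP_BYTES)
    print(f"Mirroring downloads offloads ~{offloaded:.0%} of the traffic")
    # -> ~92%; the uploads and scheduler RPCs that would stay at
    #    Berkeley are comparatively tiny.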
ID: 1339011