A Modest Proposal for Easy FREE Bandwidth Relief




Message boards : Number crunching : A Modest Proposal for Easy FREE Bandwidth Relief

Previous · 1 · 2 · 3
Author Message
bill
Joined: 16 Jun 99
Posts: 861
Credit: 24,308,194
RAC: 7,177
United States
Message 1336938 - Posted: 11 Feb 2013, 3:47:38 UTC

"because we are currently evaluating the pros and cons of moving our server farm to a colocation facility on campus. We haven't decided one way or another yet, as we still have to determine costs and feasibility of moving our Hurricane Electric connection down on campus (where the facility is located). If we do end up making the leap, we immediately gain (a) better air conditioning without worry, (b) full UPS without worry, and (c) much better remote kvm access without worry (our current situation is wonky at best). Maybe we'll also get more bandwidth (that's a big maybe). Plus they have staff on hand to kick machines if necessary. This would vastly free up time and mental bandwidth so Jeff, Eric, and I can work on other things, like science! The con of course is the inconvenience if we do have to be hands-on with a broken server. Anyway, exciting times! This wouldn't be possible, of course, without many recent server upgrades that vastly reduced our physical footprint (or rackprint), thus bringing rack space rental at the colo within a reasonable limit."

Posted by Matt, Jan 30, 2013

tbret
Project donor
Volunteer tester
Joined: 28 May 99
Posts: 2907
Credit: 218,688,354
RAC: 12,708
United States
Message 1336980 - Posted: 11 Feb 2013, 6:39:26 UTC - in response to Message 1336888.

If on the other hand the current setup can handle the data as fast or faster than it is generated at Arecibo, even with some of the day to day problems we are having, what is the point? If the drives with data are not stacking up higher and higher on the shelf, then there is no real incentive or strong reason to do things differently.



My understanding was that there is a lot of data from Green Bank that we have yet to see. So, even if we were chewing through data faster than it is collected at Arecibo, I think there is still a lot of motivation to expand the system's crunching capacity.

If I remember correctly, there is a huge amount of data available from Green Bank observations.



.clair.
Volunteer moderator
Joined: 4 Nov 04
Posts: 1300
Credit: 23,721,439
RAC: 33,963
United Kingdom
Message 1337047 - Posted: 11 Feb 2013, 14:43:55 UTC
Last modified: 11 Feb 2013, 14:59:00 UTC

One thing is certain: every year, as our crunchers are upgraded, we get closer to crunching faster than we collect data. I think Matt [or someone in the lab] commented on this some time ago.
It was not long ago that a router in the lab was upgraded because it was only capable of running at 66 megabit (or something like that), and we were stuck with it as a bottleneck in our crunching craziness.
How little time has it taken us to hit the bandwidth wall again?
Many of us, definitely myself, can now crunch work faster than we can get new work.
Green Bank data gives the project a boost to keep ahead of us on collection of work,
but that will not last.
In a few years, with all the new GPU and CPU kit that we will buy and burn our credit cards on, we will soon catch up again.
Doubling the download bandwidth would give us a short respite in getting work to crunch, and not long after that it would have to double again just to keep up with the SETI@home supercomputer that we are building,
and in years to come even a gigabit link will not keep us fed with work.
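
As a rough illustration of the "bandwidth wall" argument above, here is a back-of-envelope sketch of how many tasks per second a download link of a given speed could feed. The ~370 KB task size is a hypothetical figure chosen for illustration, not an official SETI@home number:

```python
# Back-of-envelope: how many work units per second a download link can feed.
# The 370 KB default task size is hypothetical, for illustration only.
def tasks_per_second(link_mbit: float, task_kb: float = 370.0) -> float:
    link_bytes_per_s = link_mbit * 1_000_000 / 8  # Mbit/s -> bytes/s
    task_bytes = task_kb * 1000                   # KB -> bytes
    return link_bytes_per_s / task_bytes

for mbit in (66, 100, 200, 1000):
    print(f"{mbit:5d} Mbit/s -> ~{tasks_per_second(mbit):6.1f} tasks/s")
```

Under these assumptions, even a tenfold jump in link speed only delivers a tenfold jump in tasks per second, so sustained growth in cruncher capacity eventually saturates any fixed link.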

Looking at DC projects around the planet, which is the biggest of them all?
I cannot find a list anywhere that lays out that info on a level playing field.
EDIT: http://en.wikipedia.org/wiki/List_of_distributed_computing_projects - I don't know when this was last updated.

Just had a think (OK, I know that is a dodgy thing for me to do :)
What is the S@H supercomputer capable of, in petaflops or some other kind of measure (when working at average/peak speed)?
Einstein@home has a guesstimate system for working it out.

Link
Joined: 18 Sep 03
Posts: 841
Credit: 1,578,326
RAC: 52
Germany
Message 1337110 - Posted: 11 Feb 2013, 17:28:09 UTC - in response to Message 1337047.

What is the S@H supercomputer capable of, in petaflops or some other kind of measure (when working at average/peak speed)?

According to the project status page we are now at 620,069 GigaFLOPs.
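
Since the question above asked for petaflops, the project-status figure converts directly; a minimal sketch of the arithmetic:

```python
# Convert the project-status figure of 620,069 GigaFLOPS into PetaFLOPS.
gigaflops = 620_069
petaflops = gigaflops / 1_000_000  # 1 PetaFLOPS = 1,000,000 GigaFLOPS
print(f"{petaflops:.3f} PFLOPS")   # prints "0.620 PFLOPS"
```

So at the time of that post, the combined fleet was running at roughly 0.62 PetaFLOPS.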
____________
.

.clair.
Volunteer moderator
Joined: 4 Nov 04
Posts: 1300
Credit: 23,721,439
RAC: 33,963
United Kingdom
Message 1337253 - Posted: 11 Feb 2013, 23:51:13 UTC - in response to Message 1337110.

According to the project status page we are now at 620,069 GigaFLOPs.

Thanks for the link, Link :) I seem to have missed that one; bookmarked it now.

zoom314
Project donor
Volunteer tester
Joined: 30 Nov 03
Posts: 47158
Credit: 37,090,110
RAC: 4,528
United States
Message 1337313 - Posted: 12 Feb 2013, 4:09:59 UTC - in response to Message 1334670.

It's probably a reasonable idea for a temporary workaround.

Note however that even before there was an Astropulse application or any GPU crunching, a shortie storm caused problems. That was likely with Thumper and Jocelyn doing BOINC database duties; Oscar and Carolyn have better capabilities (but so do the CPUs on users' hosts).

The other negative point is that there would likely be times when GPUs would run dry because all the tapes being split were producing only VLAR or VHAR tasks.
Joe

Well, in that case shorties could be used as fill on GPUs until other, more suitable types show up; otherwise they could be used on the CPU. I won't object to a more flexible allocation like this, if it were done, of course.
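
The "shorties as GPU fill" policy described here can be sketched in a few lines. The task kinds and queue structure are purely illustrative, not the actual SETI@home/BOINC scheduler logic:

```python
# Hypothetical sketch of a flexible allocation: prefer non-shortie tasks
# for the GPU, and hand out a shortie only when nothing more suitable is
# waiting, so the GPU never runs dry. Task names are illustrative only.
def pick_gpu_task(queue: list) -> "str | None":
    # First pass: any non-shortie (e.g. a mid-AR multibeam task) wins.
    for i, kind in enumerate(queue):
        if kind != "shortie":
            return queue.pop(i)
    # Otherwise fall back to a shortie rather than leave the GPU idle.
    return queue.pop(0) if queue else None

queue = ["shortie", "shortie", "midrange", "shortie"]
print(pick_gpu_task(queue))  # "midrange" is preferred over the shorties
print(pick_gpu_task(queue))  # only shorties remain, so one is used as fill
```

The design choice is simply a two-tier preference: the fallback branch is what keeps GPUs busy during the VLAR/VHAR-only splitting periods Joe describes.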
____________
My Facebook, War Commander, 2015

Wales_Family
Joined: 5 Sep 11
Posts: 3
Credit: 456,392
RAC: 301
New Zealand
Message 1339011 - Posted: 17 Feb 2013, 0:35:16 UTC - in response to Message 1334744.
Last modified: 17 Feb 2013, 0:38:09 UTC

Has anyone considered the option of splitting and distributing the work units from a location other than Berkeley? That would redirect a good amount of 'outbound' traffic away from the overloaded Berkeley bandwidth. The completed WUs would only be reported back to Berkeley.


Yes - see http://setiathome.berkeley.edu/forum_thread.php?id=69594&postid=1291854



Copyright © 2014 University of California