Message boards :
Number crunching :
SETI Hardware Fundraisers (With Big News Inside)
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304

> One way would be to get something like an ISP to donate bandwidth. We can't get that bandwidth to the lab, but we could place a proxy/workunit storage at the ISP.

At some stage the data has to get back to the lab, and come from the lab, and the bottleneck to the lab still remains.

Grant
Darwin NT
Sakletare · Joined: 18 May 99 · Posts: 132 · Credit: 23,423,829 · RAC: 0

> One way would be to get something like an ISP to donate bandwidth. We can't get that bandwidth to the lab, but we could place a proxy/workunit storage at the ISP.

Each workunit is sent to at least two users. If the workunit is sent once to an off-site server, from which the users download their workunits, the lab would use less than 50% of its current bandwidth. It's the workunit distribution that's the big problem; the result files returning to the lab are much smaller.
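The "less than 50%" claim follows from simple arithmetic. A hedged sketch (the function and its name are illustrative, not project code; the only input is the replication factor, and 2 is the minimum quorum mentioned above):

```python
# Hypothetical sketch of the bandwidth saving claimed above: if each
# workunit leaves the lab once and an off-site cache serves all replicas,
# the lab's download traffic shrinks by the replication factor.

def lab_traffic_fraction(replication: float) -> float:
    """Fraction of current lab download bandwidth still needed when an
    off-site cache serves every copy after the first."""
    return 1.0 / replication

print(lab_traffic_fraction(2.0))  # 0.5 -> "less than 50%" with quorum 2
```

With replication higher than 2 (resends, validation failures), the saving only improves.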
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304

> One way would be to get something like an ISP to donate bandwidth. We can't get that bandwidth to the lab, but we could place a proxy/workunit storage at the ISP.

And the savings in bandwidth there would be taken up by traffic that is currently internal to the lab, between the servers. It's a case of either getting bandwidth to the lab, or moving everything to where the bandwidth is available.

Grant
Darwin NT
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

> One way would be to get something like an ISP to donate bandwidth. We can't get that bandwidth to the lab, but we could place a proxy/workunit storage at the ISP.

SETI@home already benefits from a cost-effective ISP (Hurricane Electric), who provide a gigabit service to their termination point. The trouble is, their termination point is inside the PAIX, which is, by its very nature, a highly secure, restricted-access building. SETI does have one item of (donated) equipment in there - a tunnelling router - but anything more would require the allocation of rack space, power, cooling... probably all in short supply in what is primarily a switching, rather than data, centre.

I think your idea would work at a technical level - effectively putting a 'data doubler' on the most congested part of the network - but at the cost of added complexity, especially in the extreme case where hardware maintenance is needed urgently (what happens when a disk fails, for instance?). I think, as so often before, this comes into the category of "moving the problem from one place to another".
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004

And let's not forget that the integrity of the science must be maintained at all costs. Keeping all data stored at the main SETI site is foremost. I know the idea has been floated to offload the WUs to a secondary server off campus, where access might be better for the users. But I don't see this happening. We ARE dealing with true science here, folks, and I don't think that should be trivialized.

"Freedom is just Chaos, with better lighting." Alan Dean Foster
Sakletare · Joined: 18 May 99 · Posts: 132 · Credit: 23,423,829 · RAC: 0

> SETI@home already benefits from a cost-effective ISP (Hurricane Electric), who provide a gigabit service to their termination point.

Well, the "proxy" wouldn't have to be in a high-security location, as long as there's rack space and bandwidth available. It wouldn't even have to be in the same hemisphere as the lab, as long as there's a S@H volunteer around to handle hands-on tasks. Obviously it adds complexity to the system, but it's one way around the seemingly impassable obstacle of the S@H lab's bandwidth.

It really shouldn't require much physical handling. Everything other than hardware failure can be handled remotely, and the server could be built to be failure-resilient, with storage arrays running hot spares, etc.
Sakletare · Joined: 18 May 99 · Posts: 132 · Credit: 23,423,829 · RAC: 0

> And let's not forget that the integrity of the science must be maintained at all costs.

I'm sorry, I don't understand your reasoning. Are you afraid that someone would try to hack the server and manipulate the data if it was located outside of the lab?
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004

> It wouldn't even have to be in the same hemisphere as the lab, as long as there's a S@H volunteer around to handle hands-on tasks.

Dude....
If there were more staff, would we even be having some of these discussions? I don't think I can help much from here in Wisconsin with administering things. If y'all have any ideas how I could, I'm all ears.

"Freedom is just Chaos, with better lighting." Alan Dean Foster
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

> Well, the "proxy" wouldn't have to be in a high-security location, as long as there's rack space and bandwidth available.

I think that to reduce latency, having it as close as possible both to the data source (i.e. the lab) and to the high-bandwidth ISP would be indicated. That points to a pretty secure location, whoever hosts it - I doubt many SETI volunteers have that sort of geography, and also day-to-day access to a building with a spare, continuous 200 Mbit+ outbound internet service.
Sakletare · Joined: 18 May 99 · Posts: 132 · Credit: 23,423,829 · RAC: 0

> It wouldn't even have to be in the same hemisphere as the lab, as long as there's a S@H volunteer around to handle hands-on tasks.

The hands-on tasks wouldn't be too much of a hardship. More likely the server gets shipped to the location, and after a few years, when enough storage-array HDDs have failed and we're running out of hot spares, the volunteer (or an employee of the hosting site) replaces the broken ones.
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004

> Well, the "proxy" wouldn't have to be in a high-security location, as long as there's rack space and bandwidth available.

LOL..... I don't think my AT&T DSL would get that done. It keeps 9 crunchers locked and loaded, but a few hundred thousand? I don't think so. Besides, even if I could get the infrastructure on the premises, I'd have to start doing my own fundraisers to finance it. Calling all kitties....... HELP.....

"Freedom is just Chaos, with better lighting." Alan Dean Foster
Sakletare · Joined: 18 May 99 · Posts: 132 · Credit: 23,423,829 · RAC: 0

> Well, the "proxy" wouldn't have to be in a high-security location, as long as there's rack space and bandwidth available.

You wouldn't have to have day-to-day access to the hosting site, or even access at all. High-security sites that host external servers can move the server from the server room to a workshop, where you can work on it before it's returned. And as I said above, that wouldn't happen very often. So you wouldn't have to find a S@H volunteer with that kind of access. "Just" find a company willing to donate bandwidth, and hopefully find a S@H volunteer within driving distance of the site.
Horacio · Joined: 14 Jan 00 · Posts: 536 · Credit: 75,967,266 · RAC: 0

> I think your idea would work at a technical level - effectively putting a 'data doubler' on the most congested part of the network - but at the cost of added complexity, especially in the extreme case where hardware maintenance is needed urgently (what happens when a disk fails, for instance?). I think, as so often before, this comes into the category of "moving the problem from one place to another".

Unless the "download proxy server" were a cloud virtual server donated by some generous hosting provider... In that case there won't be "hardware issues" to deal with, and any software issue would be easy to handle from anywhere in the world... even from Matt's notebook, set up next to the stage where he's performing his music... ;D

(Anyway, the complex thing here will be finding such a generous hosting site willing to donate the disk space and the bandwidth needed, for free and forever... oh well, at least I can dream...)
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004

> I think your idea would work at a technical level - effectively putting a 'data doubler' on the most congested part of the network - but at the cost of added complexity, especially in the extreme case where hardware maintenance is needed urgently (what happens when a disk fails, for instance?). I think, as so often before, this comes into the category of "moving the problem from one place to another".

Well, SETI IS about dreaming, ain't it now?

"Freedom is just Chaos, with better lighting." Alan Dean Foster
Blurf · Joined: 2 Sep 06 · Posts: 8962 · Credit: 12,678,685 · RAC: 0

> Dude....

This has been discussed with Eric, and he said it would have to be such a tremendous donation drive that it's impossible. The drive would have to include the cost of benefits, so I believe it'd be over $100,000 just for one full-time person for one year.
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0

> I think your idea would work at a technical level - effectively putting a 'data doubler' on the most congested part of the network - but at the cost of added complexity, especially in the extreme case where hardware maintenance is needed urgently (what happens when a disk fails, for instance?). I think, as so often before, this comes into the category of "moving the problem from one place to another".

The SETI download pipe is 100 Mbits per second, or 12.5 MBytes per second - 45 GBytes per hour. If a caching proxy had that much cache space, it would be enough to handle the initial replication downloads under most circumstances. So instead of averaging ~2.15 downloads per SaH WU and ~2.78 per AP WU, the download servers would only be burdened with ~1.15 and ~1.78. Squid could do that kind of caching proxy.

Hurricane Electric has two data centers in Fremont, California, which provide colocation services to large companies, some of which might have the relatively minor additional capacity SaH would use. Or HE itself might be able to provide an affordable deal, since they're already SaH's ISP.

One concern I'd have is that even effectively increasing the amount of work which could be delivered by 60 or 70 percent wouldn't help for long. Users love to buy new, more capable systems, so pretty soon the download pipe would again be saturated and mainly controlled by internet congestion. Nevertheless, that much increase in project productivity seems worth considering if it doesn't break the budget.

Joe
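Joe's pipe and cache arithmetic checks out; a quick sketch (decimal units, all figures taken from the post above):

```python
# Verifying the download-pipe and proxy figures quoted above.

pipe_mbit = 100
mb_per_sec = pipe_mbit / 8                 # 12.5 MBytes/s
gb_per_hour = mb_per_sec * 3600 / 1000     # 45 GBytes/h of cache churn

# Average downloads per workunit today (from the post):
sah_dl_per_wu, ap_dl_per_wu = 2.15, 2.78

# If the proxy absorbs everything after the first copy of each workunit,
# the lab's download servers only pay for one download per WU:
lab_sah = sah_dl_per_wu - 1                # ~1.15
lab_ap = ap_dl_per_wu - 1                  # ~1.78

print(mb_per_sec, gb_per_hour)             # 12.5 45.0
```

That is where the "60 or 70 percent" extra delivered work comes from: 2.15/1.15 is roughly a 1.87x improvement on the SaH side before the pipe saturates again.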
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

> Dude....

If fundraising for staff salaries is ever considered, I'd far prefer to see the establishment of an endowment fund that would fund the post for an extended period (minimum five years), or in perpetuity from investment income. Otherwise, you're condemning someone to a dead-end job - either working towards their own redundancy, or constantly raising funds towards their own continuation. Neither is pretty, or conducive to a productive and enjoyable working environment.

The figures would be frightening, but I'd still like to see what the US actuarial figures would currently be for a one-salary permanent endowment.
Donald L. Johnson · Joined: 5 Aug 02 · Posts: 8240 · Credit: 14,654,533 · RAC: 20

> And let's not forget that the integrity of the science must be maintained at all costs.

Yes, that is one of the concerns. In scientific research, the integrity of the data is sometimes as important as chain of custody is for evidence in a criminal prosecution.

Donald
Infernal Optimist / Submariner, retired
Eric Korpela · Joined: 3 Apr 99 · Posts: 1382 · Credit: 54,506,847 · RAC: 60

> I'd far prefer to see the establishment of an endowment fund that would fund the post for an extended period (minimum five years), or in perpetuity from investment income.

We would love to see an endowment capable of funding the whole program, or even providing a single income. We have had term endowments in the past in the $10K-$30K range, with 2- or 3-year terms, to pay for operations of specific projects. To pay for a person is another scale entirely... a just-out-of-school junior programmer, with salary and benefits, really does run close to $100K even in the current job market.

Pulling out my trusty slide rule: if you wanted to pay for that for 5 years, with a 2% annual COLA and no promotions (which is unrealistic - a new programmer would get a promotion every 2 years or get fired), that's $520K, and the endowment to achieve it would need to be about $480K (assuming it could earn 4% annually). A perpetual endowment usually has different assumptions and can achieve higher growth, but if we assume an instantaneous payout of $100K equal to 4% growth, it would need to be at least $2.5M. And to run the whole project in perpetuity, $10M-$15M would be the target vicinity.

Sounds like a lot, until you realize that people spent $4B on the last election. Anyone know any multi-million-dollar superPAC contributors? Of course, people who contribute millions of dollars to superPACs probably expect a return on their investment. (In case I didn't make the sarcasm obvious enough: I think $2.5M is a hell of a lot of money.)

@SETIEric@qoto.org (Mastodon)
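Eric's slide-rule figures can be reproduced in a few lines. One assumption is mine: payments at the start of each year, chosen because that timing lands on the ~$480K figure; the $100K, 2% COLA, and 4% return are from the post.

```python
# Re-running the endowment arithmetic from the post above.

salary, cola, ret, years = 100_000, 0.02, 0.04, 5

# Total paid out over 5 years with a 2% COLA:
total_paid = sum(salary * (1 + cola) ** t for t in range(years))

# Endowment needed to cover those payments at 4% annual earnings,
# assuming each year's salary is paid at the start of the year:
endowment_5yr = sum(salary * (1 + cola) ** t / (1 + ret) ** t
                    for t in range(years))

# Perpetual endowment: a 4% payout must equal $100K/yr forever:
perpetual = salary / ret

print(round(total_paid))      # 520404 -> "that's $520K"
print(round(endowment_5yr))   # about 481,000 -> the "about $480K" figure
print(round(perpetual))       # 2500000 -> "$2.5M"
```

Scaling the perpetual figure up to the whole project's annual budget gives the $10M-$15M ballpark Eric quotes.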
Slavac · Joined: 27 Apr 11 · Posts: 1932 · Credit: 17,952,639 · RAC: 0

Just a note: I heard back from Jeff regarding Synergy's memory allocation. His response below:

"Thank you for thinking about this. I think that synergy is OK, memory-wise. Looking at swap activity over time (with vmstat), I see little to no paging."

Translated, that basically means more memory likely won't help things along. With that said, we're looking at upgrading PaddyM's memory from its current 128GB, but mostly that's just planning ahead for future needs.

Also, our member-donated 3TB hard drive was delivered to the Lab today. That's 5 shiny new 3TB drives in about a month - well done, guys and girls :)

Executive Director
GPU Users Group Inc. - brad@gpuug.org
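For anyone wanting to repeat Jeff's check: in `vmstat` output, the `si`/`so` columns report memory swapped in/out per second, and sustained non-zero values there indicate paging (i.e. real memory pressure). A small illustrative parser follows; the sample output is made up, but the column layout matches procps `vmstat`:

```python
# Illustrative only: scan `vmstat 5 3`-style output and report the worst
# si/so value seen. 0 means no swap activity, i.e. "little to no paging".

SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 812344 120032 904512    0    0    12    40  310  620 55  5 39  1  0
 1  0      0 810112 120040 905004    0    0     8    36  298  601 57  4 38  1  0
"""

def max_paging(vmstat_text: str) -> int:
    """Largest si/so value in the output; data rows start with a digit."""
    worst = 0
    for line in vmstat_text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            si, so = int(fields[6]), int(fields[7])  # swap-in / swap-out cols
            worst = max(worst, si, so)
    return worst

print(max_paging(SAMPLE))   # 0 -> adding RAM to this box likely won't help
```

If `si`/`so` were consistently large instead, that would be the signal that Synergy actually needed the memory upgrade.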
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.