server upload/download settings

Peter M. Ferrie
Volunteer tester
Joined: 28 Mar 03
Posts: 86
Credit: 9,967,062
RAC: 0
United States
Message 1177637 - Posted: 13 Dec 2011, 2:06:14 UTC

I suggested this idea in 2009, but it was swept under the rug. Anyway, here goes.

If the upload/download server were set to allow 1 upload/download per computer, would that cut the bandwidth in use almost in half? It seems that 2 per computer maxes out the download bandwidth, so this might ease the strain on the connection a little.

For those who say "what about the people on dialup?", boo hoo to you.
98% of the project is on some kind of broadband connection. So, to quote Spock: "the needs of the many outweigh the needs of the few... or the one."

Let's discuss...
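For what it's worth, the BOINC client already has knobs along these lines: if I remember right, cc_config.xml's <max_file_xfers> and <max_file_xfers_per_project> cap how many simultaneous file transfers the client will run, and the per-project default of 2 matches the behaviour described above. The mechanism is just a counting semaphore around the transfer code. A minimal Python sketch of the idea, purely illustrative (the fetch() stand-in and the URLs are made up; this is not BOINC's actual code):

import asyncio

MAX_TRANSFERS = 1  # the proposal above; BOINC's per-project default is 2

transfer_slots = asyncio.Semaphore(MAX_TRANSFERS)

async def fetch(url: str) -> None:
    # Stand-in for a real HTTP transfer (aiohttp/urllib in practice).
    await asyncio.sleep(1)
    print(f"finished {url}")

async def transfer(url: str) -> None:
    # One permit = one transfer in flight; the rest queue locally
    # instead of piling more connections onto the saturated link.
    async with transfer_slots:
        await fetch(url)

async def main() -> None:
    urls = [f"http://example.invalid/wu_{i}" for i in range(8)]
    await asyncio.gather(*(transfer(u) for u in urls))

asyncio.run(main())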
Khangollo
Joined: 1 Aug 00
Posts: 245
Credit: 36,410,524
RAC: 0
Slovenia
Message 1177639 - Posted: 13 Dec 2011, 2:12:25 UTC
Last modified: 13 Dec 2011, 2:19:54 UTC

I think they should just ditch the Apache web server and replace it with something that can handle a large number of connections better. At least on the download servers, for starters.
A massive amount of bandwidth is being wasted on continuous retrying, because most downloads (and now uploads, too) stall and fail.
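The usual argument for such a switch: an event-driven server multiplexes thousands of sockets in a single process, instead of dedicating a worker to every connection the way Apache's prefork MPM does, so a crowd of slow clients doesn't exhaust the worker pool. A toy sketch of the model using Python's asyncio, illustrative only (nothing here is what the project actually runs):

import asyncio

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    # Each connection is a cheap coroutine, not an OS process or thread,
    # so thousands of slow clients can be parked while others are served.
    await reader.read(4096)
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())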
SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6652
Credit: 121,090,076
RAC: 0
United States
Message 1177645 - Posted: 13 Dec 2011, 2:49:53 UTC
Last modified: 13 Dec 2011, 3:23:20 UTC

The potential scope of this project is enormous, as is the amount of data that might need to be gone through before finding a sign of intelligence. There are many who have built computers to analyze a great deal of data. Many of these computers crunch hundreds or even thousands of work units a day. Limiting the project as you suggest would drive those most productive hosts away and considerably lessen the chance of finding something interesting. I think the hardware and bandwidth need to be ramped up considerably to allow more of that data to be analyzed quickly. There are several fundraising efforts under way to do just that.

Seti was originally designed to work on a very small budget, but the scope of the project, and the interest in it, have far exceeded the original intent.

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1177647 - Posted: 13 Dec 2011, 2:52:03 UTC - in response to Message 1177637.  

I suggested this idea in 2009, but it was swept under the rug. Anyway, here goes.

If the upload/download server were set to allow 1 upload/download per computer, would that cut the bandwidth in use almost in half? It seems that 2 per computer maxes out the download bandwidth, so this might ease the strain on the connection a little.

For those who say "what about the people on dialup?", boo hoo to you.
98% of the project is on some kind of broadband connection. So, to quote Spock: "the needs of the many outweigh the needs of the few... or the one."

Let's discuss...

That would never work with a half-decent PC (even some not-so-decent ones), as it would cause more problems than we have now. So forget it (I've already forgotten your suggestion), which would be why it never went any further.

Cheers.
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1177654 - Posted: 13 Dec 2011, 3:23:01 UTC - in response to Message 1177637.  

S@H should implement a high-speed line for the heavy users.
Then charge for usage.
SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6652
Credit: 121,090,076
RAC: 0
United States
Message 1177655 - Posted: 13 Dec 2011, 3:25:50 UTC - in response to Message 1177654.  

S@H should implement a high-speed line for the heavy users.
Then charge for usage.

You would charge people for volunteering their computers and electricity?

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1177657 - Posted: 13 Dec 2011, 3:36:27 UTC - in response to Message 1177654.  

S@H should implement a high-speed line for the heavy users.
Then charge for usage.

What determines a "heavy user"?

That could be anyone with a RAC over 2000.

Cheers.
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1177659 - Posted: 13 Dec 2011, 3:43:20 UTC - in response to Message 1177657.  

S@H should implement a high-speed line for the heavy users.
Then charge for usage.

What determines a "heavy user"?

That could be anyone with a RAC over 2000.

Cheers.


Works for me. If you don't want to pay for good, high-speed throughput, just use the regular server like we do now.
Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1177660 - Posted: 13 Dec 2011, 3:50:45 UTC - in response to Message 1177659.  

The thing is, other than the uploads backing up here a bit ATM, I've pretty much had no problems at my end at all.

Cheers.
bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1177661 - Posted: 13 Dec 2011, 3:56:20 UTC - in response to Message 1177660.  

The thing is, other than the uploads backing up here a bit ATM, I've pretty much had no problems at my end at all.

Cheers.


Me neither. So both of us would be happy with the present situation. Those who aren't would have an alternative.

If I dropped a bunch of Benjamins to build a super cruncher, a couple of dollars a week shouldn't be too bad if I wanted to keep it working 24/7.
Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1177662 - Posted: 13 Dec 2011, 3:59:43 UTC - in response to Message 1177661.  

That would only be if you can afford a super cruncher, and I know that I can't.

Cheers.
tbret
Volunteer tester
Joined: 28 May 99
Posts: 3380
Credit: 296,162,071
RAC: 40
United States
Message 1177679 - Posted: 13 Dec 2011, 6:23:46 UTC - in response to Message 1177655.  

S@H should implement a high-speed line for the heavy users.
Then charge for usage.

You would charge people for volunteering their computers and electricity?

Steve


I am NOT responding to the suggestion of a surcharge for a guaranteed high-speed connection to the SETI servers. (I kinda like that, really; it would just be an additional expense like my electricity, the nVidia cards, the upgraded CPUs, the burned-out power supplies, etc. Then the non-surcharged could fight for whatever server capacity we weren't using up, and ultimately, although it might cost a few dollars more, it would prevent the waste of a fistful of dollars at my end.)

I AM responding to Steve's response.

Yes. Yes, some would. Look around. Everyone's out to "get" their rich boss. You know, the guy with the invested capital, with the most to lose if the business does poorly, and who created the enterprise and their jobs.

We've become a world, not merely a nation, of...strange-thinking folks.

Me? I'd keep the top, oh, 40,000 crunchers and I'd cut the rest off. Produce or perish. It ain't about democracy and making nice. It's about finding an ET signal if one's out there.

Not so much now that we've imposed silly limits and BOINC is boinced, but before, when I read Scarecrow's graphs, I got the definite impression that most of the computers were doing less than 20% of the work while 10% were doing most of the work.

So, of course, let us punish those who do the most work in the name of "fairness."

Fine. Less work gets done, the slower crunching means a diminished chance of finding an intelligent signal, and more electricity is wasted (since every machine has some "fixed" overhead, an unproductive computer is a waste), and...

Wait. What's that I hear in the distance?

"Kumbaya, my lord, kumbaya..."

Quick, hide anything of value. Throw the bong to divert their attention.

Run.


bill
Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1177691 - Posted: 13 Dec 2011, 8:22:29 UTC - in response to Message 1177679.  

Per your response to Steve.

You have to remember you're not dealing with a business. You're dealing with a Kalifornia government-funded university at Berkeley, the ground zero of the 'no cruncher left behind' doctrine. Kumbaya indeed.
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1177715 - Posted: 13 Dec 2011, 10:09:38 UTC

No cruncher left behind indeed.
As it should be.

Just think about what is being said here.

On one hand, it is suggested to cut off anybody other than the top producers.

On the other hand, it is suggested that the 'little' producers are causing the problems. So cut them off.

Now, if the little guys are really not collectively producing that much, they could not really be the problem.....

Which is it?

Face it... the bigger crunchers like myself are getting a little pissed right now because we are being limited by the project and by various problems with the servers and the bandwidth connecting us to them.

But I would not suggest that cutting anybody off from the project should be the answer to my angst.

The Seti infrastructure has started to be overwhelmed by its own success and the ever-increasing power of the computers attached to it. The only answer, 'if' the project wants to continue to grow, is to expand the infrastructure.

The GPUUG fund drive is aggressively pursuing that goal. We have raised enough for many hard drives and a RAID array, and are well on our way to building another server. Please see the fundraising thread for more details, and contribute if you wish to and are able.

You must also realize that the current situation is the culmination of a poorly implemented Boinc 'fix' to a problem that, frankly, affected very few of us, myself included.

That fix resulted in limiting all caches, so that when the fix for the 'fix' took place, even worse havoc would not ensue. And about that time, all the datasets from Arecibo started to be fast-scan work generating almost nothing but shorty WUs, resulting in even worse congestion.

So, the answer is not to cut anybody off, but to work towards a solution.

I hope most of us shall have enough patience to see it through.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

Wiggo
Joined: 24 Jan 00
Posts: 34754
Credit: 261,360,520
RAC: 489
Australia
Message 1177719 - Posted: 13 Dec 2011, 10:16:32 UTC - in response to Message 1177715.  

Personally, I think a fresh set of eyes with an excellent network engineering background is needed to go over the whole setup, as I wouldn't be surprised if the system in its current form is not only unbalanced but probably DDoSing itself, so there is no point to these ideas at all.

Cheers.
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1177722 - Posted: 13 Dec 2011, 10:26:06 UTC - in response to Message 1177719.  

Personally, I think a fresh set of eyes with an excellent network engineering background is needed to go over the whole setup, as I wouldn't be surprised if the system in its current form is not only unbalanced but probably DDoSing itself, so there is no point to these ideas at all.

Cheers.

I have suggested that several times in the past....
I have asked in the past whether a true professional, enterprise-grade server expert would be willing to donate a week or so of their time to fully analyze what is going on with the Seti comms setup.
I have expressed many times (NOT being a professional at all), just from witnessing what goes on when my rigs try to communicate with the Seti servers, my belief that waaaaaaaaaaaaay too much of our limited bandwidth is being wasted by improper configuration. Watching a host try to connect time after time after time, download maybe 5% of a WU, and then go into retry, repeat, repeat, repeat....
You get the idea. As the cowboy saying goes... all hat, no cattle.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22204
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1177728 - Posted: 13 Dec 2011, 10:56:16 UTC

I'm not sure the problems are solely to do with the network topology and network-level configuration, as I think some of the strategies within BOINC are not helping the problems clear.

From my own local observations (totally unscientific), it would appear that a WU that has retried and been held back for a long time is actually more likely to go back into retry wait again. This has two effects: first, the increased management traffic to handle the retry; second, it clogs up the server with whatever bit of that WU is left (for downloads), management flags and so on, and the more that gets left on the server in this state, the more likely you are to trigger retries, because the server can't get the data to the port quickly enough. Of course, with uploads you may have sent the whole lot, only to have the "ACK" fail and have to send it all again, which is an obvious waste of bandwidth.
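One standard way to keep mass retries from re-congesting a link is exponential backoff with random jitter, so stalled transfers don't all come back at the same instant. A hedged sketch in Python (try_transfer() is a hypothetical stand-in for one upload or download attempt; the numbers are arbitrary):

import random
import time

def try_transfer() -> bool:
    # Hypothetical stand-in: pretend each attempt succeeds 20% of the time.
    return random.random() < 0.2

def transfer_with_backoff(max_attempts: int = 8) -> bool:
    window = 1.0
    for _ in range(max_attempts):
        if try_transfer():
            return True
        # Full jitter: sleep a random fraction of the current window, then
        # double it. This spreads retries out instead of synchronizing them.
        time.sleep(random.uniform(0, window))
        window = min(window * 2, 3600)  # cap the window at an hour
    return False

print("succeeded" if transfer_with_backoff() else "gave up")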
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1177998 - Posted: 14 Dec 2011, 9:33:46 UTC - in response to Message 1177728.  

I'm not sure the problems are solely to do with the network topology and network-level configuration, as I think some of the strategies within BOINC are not helping the problems clear.

From my own local observations (totally unscientific), it would appear that a WU that has retried and been held back for a long time is actually more likely to go back into retry wait again. This has two effects: first, the increased management traffic to handle the retry; second, it clogs up the server with whatever bit of that WU is left (for downloads), management flags and so on, and the more that gets left on the server in this state, the more likely you are to trigger retries, because the server can't get the data to the port quickly enough. Of course, with uploads you may have sent the whole lot, only to have the "ACK" fail and have to send it all again, which is an obvious waste of bandwidth.

I'll just say this......
From a purely 'common sense' point of view:
It would seem to me that limiting the number of concurrent connections, temporarily refusing the rest outright, and fully supporting the connections that are made through to completion would make better use of the limited bandwidth currently available to the project.

This obviously is not the best or ultimate solution; only more available bandwidth would serve the project better. And then survey, and hopefully correct, whatever other bottlenecks become apparent at that point.
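That "refuse the rest outright" policy is classic admission control: keep a hard cap on in-flight requests and turn the overflow away immediately, which costs a few bytes, rather than letting every connection crawl and stall. A toy Python asyncio sketch of the policy (the cap and port are assumed; this is not the project's actual server):

import asyncio

MAX_ACTIVE = 100   # assumed cap on connections served concurrently
active = 0

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    global active
    if active >= MAX_ACTIVE:
        # Refuse instantly: a one-line rejection is far cheaper than a
        # half-served transfer that stalls and gets retried anyway.
        writer.write(b"HTTP/1.0 503 Service Unavailable\r\nRetry-After: 60\r\n\r\n")
        await writer.drain()
        writer.close()
        return
    active += 1
    try:
        await reader.read(4096)  # serve the request for real here
        writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
        writer.close()
    finally:
        active -= 1

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8081)
    async with server:
        await server.serve_forever()

asyncio.run(main())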
"Freedom is just Chaos, with better lighting." Alan Dean Foster

HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1178005 - Posted: 14 Dec 2011, 10:11:04 UTC - in response to Message 1177998.  

I'll just say this......
From a purely 'common sense' point of view:
It would seem to me that limiting the number of concurrent connections, temporarily refusing the rest outright, and fully supporting the connections that are made through to completion would make better use of the limited bandwidth currently available to the project.

This obviously is not the best or ultimate solution; only more available bandwidth would serve the project better. And then survey, and hopefully correct, whatever other bottlenecks become apparent at that point.


I've had this thought before, and I still think it might be a good idea. Instead of the BOINC client connecting directly to the servers, it would contact a token server that would issue the client a token to connect to the server it needs, be it the scheduler, download, or upload server. Then the number of concurrent connections to each server could be controlled, and those machines wouldn't get hammered, as the token server is the one getting beaten to death. The biggest issue I see with something like that working, besides all the effort of coding it, is that everyone would need to use the version that supports it.
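A sketch of that token idea: a small broker hands out at most N short-lived tokens, and the real servers would serve only clients presenting one. Everything below (capacity, lifetime, port, token format) is assumed purely for illustration:

import asyncio
import time
import uuid

MAX_TOKENS = 50      # assumed concurrency budget across the server pool
TOKEN_TTL = 300      # seconds before an unused token expires
tokens = {}          # token -> expiry timestamp

def issue_token():
    now = time.time()
    # Drop expired tokens so crashed clients don't pin slots forever.
    for tok, expiry in list(tokens.items()):
        if expiry < now:
            del tokens[tok]
    if len(tokens) >= MAX_TOKENS:
        return None  # at capacity: the client must back off and retry
    tok = uuid.uuid4().hex
    tokens[tok] = now + TOKEN_TTL
    return tok

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    tok = issue_token()
    writer.write((tok or "BUSY").encode() + b"\n")
    await writer.drain()
    writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8090)
    async with server:
        await server.serve_forever()

asyncio.run(main())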
SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the BP6/VP6 User Group: http://tinyurl.com/8y46zvu
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1178006 - Posted: 14 Dec 2011, 10:16:16 UTC - in response to Message 1178005.  
Last modified: 14 Dec 2011, 10:16:49 UTC

I'll just say this......
From a purely 'common sense' point of view:
It would seem to me that limiting the number of concurrent connections, temporarily refusing the rest outright, and fully supporting the connections that are made through to completion would make better use of the limited bandwidth currently available to the project.

This obviously is not the best or ultimate solution; only more available bandwidth would serve the project better. And then survey, and hopefully correct, whatever other bottlenecks become apparent at that point.


I've had this thought before, and I still think it might be a good idea. Instead of the BOINC client connecting directly to the servers, it would contact a token server that would issue the client a token to connect to the server it needs, be it the scheduler, download, or upload server. Then the number of concurrent connections to each server could be controlled, and those machines wouldn't get hammered, as the token server is the one getting beaten to death. The biggest issue I see with something like that working, besides all the effort of coding it, is that everyone would need to use the version that supports it.
Why would a different version be required? Simply redirect all inquiries to the 'proxy' or interim server and have it handle things from there...
I think that is why a certain proxy server was at one time handling things wonderfully (not working for me now, for whatever reason).
It may have been caching comms, trying and retrying on its own, and just passing on to the host whatever was successful.
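That behaviour is store-and-forward: the proxy accepts the host's upload at LAN speed, acknowledges it locally, and then retries toward the project on its own schedule. A hypothetical Python sketch of the queueing core (the upstream send is faked with a coin flip; there is no real BOINC protocol handling here):

import queue
import random
import threading
import time

uploads = queue.Queue()  # results waiting to be forwarded upstream

def accept_from_host(payload: bytes) -> None:
    # The host gets an instant local "ACK"; the flaky WAN hop becomes
    # the proxy's problem instead of the client's.
    uploads.put(payload)

def forward_worker() -> None:
    while True:
        payload = uploads.get()
        while True:
            if random.random() < 0.5:  # stand-in for a real POST upstream
                break
            time.sleep(1)              # back off before the next attempt
        uploads.task_done()

threading.Thread(target=forward_worker, daemon=True).start()
accept_from_host(b"result for wu_123")
uploads.join()  # block until everything has been forwarded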
"Freedom is just Chaos, with better lighting." Alan Dean Foster
