Uploading


Profile Jim Baize
Volunteer tester

Joined: 6 May 00
Posts: 758
Credit: 149,536
RAC: 0
United States
Message 138064 - Posted: 18 Jul 2005, 2:52:17 UTC - in response to Message 138061.  

is asymptotically approaching zero.


So, what you are saying is that it is approaching zero with no symptoms?

I don't think that is what you meant to say.

Jim
ID: 138064
Profile Siran d'Vel'nahr
Volunteer tester
Joined: 23 May 99
Posts: 7379
Credit: 44,181,323
RAC: 238
United States
Message 138120 - Posted: 18 Jul 2005, 4:23:29 UTC - in response to Message 137902.  

....
....
Damn.
....
Dumbed down......heh......it's different humour and needs presenting more slowly perhaps.......but dumbing down!!!!

Such a screwed up attitude.... From reading your posts, I suggest looking into a mirror much more often....

(In a Homer Simpson voice, with special effects of slap on forehead)
Duh, missed that one!

(Normal voice and no specials)
Guess you missed that humour also.

;P

BTW: Sorry, no mirrors down here. They break too easily...
(Mirror, mirror, On the wall, Who is... ;) )

Cheers,
Martin

Martin, Vulcans are straight-faced when it comes to humor.... >;-)

On a serious note: I did not see any humor in the put-down of the citizens of this country, even if it was in jest. This is an international board, and I for one do NOT say such things about another country's people, not even in fun. It tends to ruffle some feathers. And the guy I replied to has been ruffling mine for some time anyway....

Keep BOINCing.... >:-)

CAPT Siran d'Vel'nahr - L L & P _\\//
Winders 11 OS? "What a piece of junk!" - L. Skywalker
"Logic is the cement of our civilization with which we ascend from chaos using reason as our guide." - T'Plana-hath
ID: 138120
Bart Barenbrug

Joined: 7 Jul 04
Posts: 52
Credit: 337,401
RAC: 0
Netherlands
Message 138216 - Posted: 18 Jul 2005, 12:31:14 UTC - in response to Message 137788.  

The basic idea used in this is collision detection and back-off, like what is used in Ethernet. The trouble with this is that saturation starts to take place when the network gets above 60% of capacity... (others say it is efficient into the 90's, but I think they are dreaming).

One of the things that makes it worse now is that every client with a reasonable queue is trying to connect every few minutes or so, because there is always one WU that reaches its "let's try again" time. Maybe that back-off should be made global for all results of a client, so the servers would get less load. That small change is client-side only, so it could be implemented quite easily. To profit from it further, once a client does set up a connection, it could use that connection to the full by uploading all of its finished results (with proper time-outs of course, and a smart client might upload in earliest-deadline-first order). In other words, don't break the connection once one upload is done, but keep it open while more remains to be communicated (this means the client would be in charge of closing the connection, so the server would need a good time-out). Such a change would require server-side adaptations, I guess, so it would be harder to implement.

I also seem to remember a time when clients would collect a few results before first trying to upload them (which is fine if the deadline is not near). That could also relieve pressure on the servers, if combined with a "once a connection is there, use it until nothing more needs to be communicated" strategy. A rough sketch of what I mean follows below.
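Purely as a sketch of the client-side bookkeeping I have in mind (the names and data structures are made up for illustration - this is not actual BOINC code):

    // Sketch only: one global back-off timer for the whole client, and
    // earliest-deadline-first batching once a connection has been made.
    #include <algorithm>
    #include <cstdio>
    #include <ctime>
    #include <vector>

    struct FinishedResult {
        const char* name;
        std::time_t deadline;                 // report deadline
    };

    struct UploadQueue {
        std::vector<FinishedResult> pending;
        std::time_t next_attempt = 0;         // one retry clock for all results

        bool may_try(std::time_t now) const { return now >= next_attempt; }

        // On failure, back off the whole queue, not each result separately.
        void on_failure(std::time_t now, long backoff_seconds) {
            next_attempt = now + backoff_seconds;
        }

        // On success, keep the connection open and drain the queue,
        // earliest deadline first.
        template <class Uploader>
        void drain(Uploader upload_one) {
            std::sort(pending.begin(), pending.end(),
                      [](const FinishedResult& a, const FinishedResult& b) {
                          return a.deadline < b.deadline;
                      });
            for (const auto& r : pending) upload_one(r);
            pending.clear();
        }
    };

    int main() {
        UploadQueue q;
        q.pending = {{"result_b", 2000}, {"result_a", 1000}};
        if (q.may_try(/*now=*/500))
            q.drain([](const FinishedResult& r) { std::printf("uploading %s\n", r.name); });
        return 0;
    }

The point is simply that one timer and one connection replace a separate retry clock per result.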


Worse, to implement it would require a change in both the clients and the servers, and this would have to be done across all projects at the same time ... :(

I don't see how to make this work ... :(


A suggestion that's even worse in that respect: it seems that server-side hardware shortage is common, whereas client-side there is hardware aplenty (especially CPU cycles, which at the moment seem to be the problem if a server is completely CPU-bound). Now servers do completely different things than clients, need different resources, and can be considered reliable (in a security sense) whereas clients cannot, but would it be possible to distribute the work a little more? Maybe let clients that work on the same work unit get in contact with each other (in a peer to peer manner) and handle validation (redundantly) amongst themselves, so only validated results would have to be reported back to the servers. I know this is long-term, might not combine well with dial-up clients, and needs a lot more work to get the details right, so just take it as an example of asking oneself: what else could we decentralise and let clients help us out with? The answer is probably "not much more than is already done", since I'm sure the devs have asked the same question. But it can't hurt to revisit questions like these now and again, so consider this post a friendly reminder. ;)
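To make the peer-validation idea a bit more concrete, here is a toy sketch (everything here is hypothetical - no such protocol exists in BOINC):

    // Toy sketch: clients that computed the same work unit exchange a
    // fingerprint of their result; if enough of them agree, only one peer
    // reports back to the project server.
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    struct PeerResult {
        std::string client_id;
        std::string fingerprint;              // e.g. a hash of the result file
    };

    // Returns the client chosen to report, or "" if no quorum was reached.
    std::string peer_validate(const std::vector<PeerResult>& peers, std::size_t quorum) {
        std::map<std::string, std::vector<std::string>> by_fingerprint;
        for (const auto& p : peers)
            by_fingerprint[p.fingerprint].push_back(p.client_id);

        for (const auto& group : by_fingerprint)
            if (group.second.size() >= quorum)
                return group.second.front();  // first peer of the agreeing group reports
        return "";
    }

    int main() {
        std::vector<PeerResult> peers = {
            {"host_a", "1f3c"}, {"host_b", "1f3c"}, {"host_c", "9a07"}};
        std::string reporter = peer_validate(peers, 2);
        std::printf("reporter: %s\n", reporter.empty() ? "(none)" : reporter.c_str());
        return 0;
    }

The hard parts - reachability, dishonest peers, who gets to report - are exactly the details that would still need working out.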
ID: 138216
Profile Paul D. Buck
Volunteer tester

Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 138219 - Posted: 18 Jul 2005, 12:39:08 UTC - in response to Message 138216.  

Maybe let clients that work on the same work unit get in contact with each other (in a peer to peer manner) and handle validation (redundantly) amongst themselves, so only validated results would have to be reported back to the servers.

With that code on the clients, and with the program being open source, that means there is the possibility of someone changing the validation code ...
ID: 138219
Bart Barenbrug

Joined: 7 Jul 04
Posts: 52
Credit: 337,401
RAC: 0
Netherlands
Message 138223 - Posted: 18 Jul 2005, 12:50:25 UTC - in response to Message 138219.  

Maybe let clients that work on the same work unit get in contact with each other (in a peer to peer manner) and handle validation (redundantly) amongst themselves, so only validated results would have to be reported back to the servers.

With that code on the clients, and with the program being open source, that means there is the possibility of someone changing the validation code ...


Of course. Which is why the validation would need to be done redundantly (maybe on all clients that computed the result, or even on different clients). So server-side there would still be some sort of validation required, but only of the validation process, not of the results. It might not even make a big change in validation effort (CPU-cycle-wise), but if clients could forgo reporting results that they know are bad, that could act as a separate filter, relieving some server-side pressure (connection-wise). But maybe it won't help much. Just a not-worked-out idea along the lines of "could client CPUs be helpful if servers are CPU-bound?".
ID: 138223
Profile Contact
Volunteer tester
Joined: 16 Jan 00
Posts: 197
Credit: 2,249,004
RAC: 0
Canada
Message 138251 - Posted: 18 Jul 2005, 14:26:24 UTC

I may have got lucky, but I was just able to upload/report.
Real quick. No dropouts!
ID: 138251
Profile trux
Volunteer tester
Joined: 6 Feb 01
Posts: 344
Credit: 1,127,051
RAC: 0
Czech Republic
Message 138254 - Posted: 18 Jul 2005, 14:48:30 UTC - in response to Message 138223.  

Maybe let clients that work on the same work unit get in contact with each other (in a peer to peer manner) and handle validation (redundantly) amongst themselves, so only validated results would have to be reported back to the servers.

Unfortunately that isn't a feasible way without having an exchange server (thus eliminating the advantage of off-site validation) - clients would only rarely be able to contact each other: they are usually not permanently online, they use dynamic IP addresses, they would need to act as servers to be able to receive data from other clients (and that's not always possible), they are behind firewalls or proxy servers, etc.
A better idea might be decentralizing the system and having multiple download/upload servers, validators, and other servers around the world that could then communicate with the central servers in a more efficient way. If I remember correctly, it worked somewhat like this in S@H Classic - there were a couple of WU proxy servers that people could use instead of communicating directly with Berkeley.
trux
BOINC software
Freediving Team
Czech Republic
ID: 138254
Don Erway
Volunteer tester

Joined: 18 May 99
Posts: 305
Credit: 471,946
RAC: 0
United States
Message 138256 - Posted: 18 Jul 2005, 14:49:52 UTC

I'm not having any luck at all.

"Retry Now" never succeeds at delivering any results.

My slower machine - an Athlon XP 2100 - is now up to 20 completed WUs that it is trying to upload. Some of them are due the 24th.

My faster machine - a P4 2.4 running at 2.7 - is at 10. It seems to have more "luck" getting them in. The oldest one is not due until the 27th.

I am on cable, with 5 meg down and 350k up.

Any news?

Don

ID: 138256
timethief

Joined: 1 Jan 04
Posts: 25
Credit: 545,474
RAC: 0
Germany
Message 138257 - Posted: 18 Jul 2005, 14:57:45 UTC

A little trick and some thoughts ...

If you are able to use a proxy for your SETI connection, you might use it now and set its value for connection retries to 5 (or higher). I was able to increase the rate of successfully delivered results from 2.4% to 27% (yes, I had enough results to calculate these numbers).
This still uses the SETI server as the beast of burden, but every result that does get delivered no longer has to be retried.
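For what it's worth: if each retry had an independent chance p of getting through, the chance that at least one of n immediate retries succeeds would be 1 - (1 - p)^n. My measured gain is bigger than this simple model predicts, but it shows why retrying at the proxy pays off:

    // Chance that at least one of n independent retries succeeds,
    // starting from an observed single-attempt success rate of 2.4%.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double p = 0.024;
        const int tries[] = {1, 5, 10, 20};
        for (int n : tries)
            std::printf("n = %2d  ->  P(at least one success) = %4.1f%%\n",
                        n, 100.0 * (1.0 - std::pow(1.0 - p, n)));
        return 0;
    }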

In my opinion, the current problem is caused by the long outage followed by a mass of requests thereafter. Even if you are able to upload the data, you need a second request to get them validated ... and if you get no connection, the whole transaction counts as a failure.
So, if the number of requests rises beyond a certain point, more communications will fail than succeed. It seems to be a trap, and the problem will grow every day as long as new workunits are given out and unsuccessfully returned. I think there are two ways out of this trap:
1. Stop sending out workunits for a period of time, until the system becomes stable again.
or
2. Establish an intentional bottleneck. The bottleneck might be a server which hands out a limited number of 'tickets'. With a ticket, another, less busy server would be able to process the stateful upload/download transaction in full. If there is a mass of requests, like this time, only the ticket server would be overwhelmed, but it would still be able to send out a ticket within a single request. And if you get a ticket, the upload should succeed. (A sketch of what I mean follows below.)
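A toy sketch of the ticket idea, just to show the shape of it (all names are invented; nothing like this exists in the real BOINC servers):

    // Sketch: a ticket server admits only a fixed number of concurrent
    // upload transactions; everyone else is told to come back later.
    #include <cstdio>
    #include <mutex>

    class TicketServer {
        std::mutex m_;
        int available_;
    public:
        explicit TicketServer(int capacity) : available_(capacity) {}

        // Cheap request handled by the ticket server:
        // either you get a ticket or you don't.
        bool try_acquire() {
            std::lock_guard<std::mutex> lock(m_);
            if (available_ == 0) return false;
            --available_;
            return true;
        }

        // Called by the upload server once the transaction completes.
        void release() {
            std::lock_guard<std::mutex> lock(m_);
            ++available_;
        }
    };

    int main() {
        TicketServer tickets(2);              // only 2 uploads in flight at once
        for (int client = 0; client < 4; ++client) {
            if (tickets.try_acquire())
                std::printf("client %d: ticket granted, start upload\n", client);
            else
                std::printf("client %d: busy, try again later\n", client);
        }
        return 0;
    }

The upload servers would then only ever see as many clients as there are tickets, no matter how big the backlog outside is.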

ID: 138257
Profile Tigher
Volunteer tester

Joined: 18 Mar 04
Posts: 1547
Credit: 760,577
RAC: 0
United Kingdom
Message 138261 - Posted: 18 Jul 2005, 15:15:08 UTC - in response to Message 138257.  



In my opinion, the current problem is caused by the long outage followed by a mass of requests thereafter. Even if you are able to upload the data, you need a second request to get them validated ... and if you get no connection, the whole transaction counts as a failure.
So, if the number of requests rises beyond a certain point, more communications will fail than succeed. It seems to be a trap, and the problem will grow every day as long as new workunits are given out and unsuccessfully returned. I think there are two ways out of this trap:



I think you are right. This problem has been predicted, it is getting worse and, in theory, may never end unless we stop processing WUs. I cannot help but feel though there is a touch of something else going on too.....in the server perhaps.

ID: 138261
Don Erway
Volunteer tester

Joined: 18 May 99
Posts: 305
Credit: 471,946
RAC: 0
United States
Message 138269 - Posted: 18 Jul 2005, 15:24:24 UTC

I note that some have modified the SETI source code to allow it to retry uploads every minute, rather than randomly every 2-3 hours!

I'm guessing that is a big part of the problem, denying the rest of us a chance to get in.


ID: 138269
Ken Phillips m0mcw
Volunteer tester
Joined: 2 Feb 00
Posts: 267
Credit: 415,678
RAC: 0
United Kingdom
Message 138275 - Posted: 18 Jul 2005, 15:37:25 UTC - in response to Message 138261.  
Last modified: 18 Jul 2005, 15:38:17 UTC



In my opinion, the current problem is caused by the long outage followed by a mass of requests thereafter. Even if you are able to upload the data, you need a second request to get them validated ... and if you get no connection, the whole transaction counts as a failure.
So, if the number of requests rises beyond a certain point, more communications will fail than succeed. It seems to be a trap, and the problem will grow every day as long as new workunits are given out and unsuccessfully returned. I think there are two ways out of this trap:



I think you are right. This problem has been predicted, it is getting worse and, in theory, may never end unless we stop processing WUs. I cannot help but feel though there is a touch of something else going on too.....in the server perhaps.


I came to a similar conclusion myself a few days ago, i.e.: more stalled uploads = more doomed requests to upload = more congestion and more failed uploads; more downloads = more failed uploads. So I set my little farm to deplete seti@home (no new work). Everything is peachy this end now (unless my queued uploads go over deadline!), my queue is stable, network bandwidth isn't increasing all the time, and all my hosts are merrily crunching on various other projects.

All I need to do when things pick up is re-enable work requests for seti@home with one little click in boincview (brilliant program). If my units expire before then, so what? I'll be upset, but it won't kill me.

Ken


Ken Phillips

BOINC question? Look here



"The beginning is the most important part of the work." - Plato
ID: 138275
Profile trux
Volunteer tester
Joined: 6 Feb 01
Posts: 344
Credit: 1,127,051
RAC: 0
Czech Republic
Message 138285 - Posted: 18 Jul 2005, 15:51:15 UTC - in response to Message 138269.  

I note that some have modified the seti source code, to allow it to retry uploads every minute. Not randomly, every 2-3 hours!
I'm guessing that is a big part of the problem, denying the rest of us a chance to get in.

1) Uploading is not controlled by the S@H application but by BOINC, so modifying the SETI app would have no impact at all.
2) I doubt there is anyone who'd do it, and I believe even less that if someone really did, he'd offer it publicly. Are there any facts behind your claim, or are you just an opponent of Open Source?
trux
BOINC software
Freediving Team
Czech Republic
ID: 138285
Don Erway
Volunteer tester

Joined: 18 May 99
Posts: 305
Credit: 471,946
RAC: 0
United States
Message 138289 - Posted: 18 Jul 2005, 15:53:16 UTC - in response to Message 138285.  
Last modified: 18 Jul 2005, 15:55:06 UTC

I note that some have modified the seti source code, to allow it to retry uploads every minute. Not randomly, every 2-3 hours!
I'm guessing that is a big part of the problem, denying the rest of us a chance to get in.

1) Uploading is not controlled by the S@H application but by BOINC, so modifying the SETI app would have no impact at all.
2) I doubt there is anyone who'd do it, and I believe even less that if someone really did, he'd offer it publicly. Are there any facts behind your claim, or are you just an opponent of Open Source?


See this post, where someone shows a log with "backing off 1 minute, for every retry..."

http://setiathome.berkeley.edu/forum_thread.php?id=17147#137900


And no, I have personally contributed a lot to open source over the last 20 years, including gcc for 68k and lots of small fixes in emacs and nt-emacs.


ID: 138289
Profile Tigher
Volunteer tester

Joined: 18 Mar 04
Posts: 1547
Credit: 760,577
RAC: 0
United Kingdom
Message 138291 - Posted: 18 Jul 2005, 15:53:47 UTC - in response to Message 138285.  

I note that some have modified the seti source code, to allow it to retry uploads every minute. Not randomly, every 2-3 hours!
I'm guessing that is a big part of the problem, denying the rest of us a chance to get in.

1) Uploading is not controlled by the S@H application but by BOINC, so modifying the SETI app would have no impact at all.
2) I doubt there is anyone who'd do it, and I believe even less that if someone really did, he'd offer it publicly. Are there any facts behind your claim, or are you just an opponent of Open Source?


Agreed.....but never mind the facts.....why on earth would anybody want or see the need to do it. It does not make any sense!


ID: 138291
Profile trux
Volunteer tester
Joined: 6 Feb 01
Posts: 344
Credit: 1,127,051
RAC: 0
Czech Republic
Message 138299 - Posted: 18 Jul 2005, 16:04:39 UTC - in response to Message 138289.  

See this post, where someone shows a log, with "backing off 1 minute, for every retry..."

http://setiathome.berkeley.edu/forum_thread.php?id=17147#137900

Yes, I already saw it, but all I can see is the pretty standard, normal behaviour where the delays start growing from very small values (~1 minute) up to about 3-4 hours and then start falling again. In the log at the link you posted, there are ~8 back-offs with delays growing from 1 minute to 8 minutes, and that's exactly the designed behaviour. There is absolutely nothing supporting your claims. As Tigher wrote, I also do not see any reason why anyone would make such a change. And even if someone did, he would certainly not offer it publicly.
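For anyone curious, this is roughly the shape of such a growing back-off - a simplified illustration with made-up constants, not the actual BOINC source:

    // Simplified illustration: the retry delay grows with each consecutive
    // failure, is randomized, and is capped at a few hours.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>

    double backoff_seconds(int failures, std::mt19937& rng) {
        const double min_delay = 60.0;            // ~1 minute
        const double max_delay = 4.0 * 3600.0;    // ~4 hours
        double upper = std::min(max_delay, min_delay * std::pow(2.0, failures));
        std::uniform_real_distribution<double> pick(min_delay, upper);
        return pick(rng);
    }

    int main() {
        std::mt19937 rng(42);
        for (int failures = 0; failures < 10; ++failures)
            std::printf("after %d consecutive failures: wait %.0f s\n",
                        failures, backoff_seconds(failures, rng));
        return 0;
    }

Growing delays like the ones in that log are exactly what you would expect from an unmodified client.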

trux
BOINC software
Freediving Team
Czech Republic
ID: 138299
Profile Jim Baize
Volunteer tester

Joined: 6 May 00
Posts: 758
Credit: 149,536
RAC: 0
United States
Message 138329 - Posted: 18 Jul 2005, 16:42:25 UTC - in response to Message 138291.  

Agreed.....but never mind the facts.....why on earth would anybody want or see the need to do it. It does not make any sense!


Why do it? For the same reason that people complain when they can't get their uploads to go through.

Some people complain about the uploads not going through; others have done something about it.

Granted, I believe that the people who have changed their BOINC client to do this are thinking only of themselves and not of the project as a whole. They are adding to the problem, but they have done something rather than just complain.

So, it makes sense to me why one would do it, if one thinks only of oneself.

Jim
ID: 138329
Profile Tigher
Volunteer tester

Joined: 18 Mar 04
Posts: 1547
Credit: 760,577
RAC: 0
United Kingdom
Message 138331 - Posted: 18 Jul 2005, 16:47:32 UTC - in response to Message 138329.  

Agreed.....but never mind the facts.....why on earth would anybody want or see the need to do it. It does not make any sense!


Why do it? for the same reason that people complain when they can't get their uploads to go through.

Some people complain about the uploads not going through, others have done something about it.

Granted, I believe that the people who have changed their BOINC client to do this are thinking only of themselves and not of the project as a whole. They are adding to the problem, but they have done something rather than just complain.

So, it makes sense to me why to do it if one is to only think about himself.

Jim


But if they have done this....does it not just make it worse not better.....even for them?

Regards
Ian


ID: 138331
Profile Jim Baize
Volunteer tester

Joined: 6 May 00
Posts: 758
Credit: 149,536
RAC: 0
United States
Message 138335 - Posted: 18 Jul 2005, 16:51:45 UTC - in response to Message 138331.  

Good question.

If people have made these kinds of changes to the BOINC client then they are helping themselves in the short run. Today or right now they get their results uploaded; however, they add network congestion.

In the long run it hurts everyone.

Jim


But if they have done this....does it not just make it worse not better.....even for them?

Regards
Ian


ID: 138335
Profile Tigher
Volunteer tester

Joined: 18 Mar 04
Posts: 1547
Credit: 760,577
RAC: 0
United Kingdom
Message 138337 - Posted: 18 Jul 2005, 16:58:27 UTC - in response to Message 138335.  

Good question.

If people have made these kinds of changes to the BOINC client then they are helping themselves in the short run. Today or right now they get their results uploaded; however, they add network congestion.

In the long run it hurts everyone.

Jim


But if they have done this....does it not just make it worse not better.....even for them?

Regards
Ian



You're right for sure there. Let's hope they read your assessment and apply a little logic to what they are doing to themselves and the rest of us.


ID: 138337