Message boards :
Number crunching :
Transfers stall, then I click Retry and there's instant BW; why is this...?
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
I run mainly SETI AP tasks on my ATI card, and I do get tasks. When the downloads start they run nicely, but after a few percent they stop and go into a retry backoff. If I click the Retry button, whoops, they start almost instantly again, then stall again after a few more percent of the download. This seems strange to me: there clearly is bandwidth available, since the download resumes the moment the Retry button is clicked. Anyone have a clue why transfers stall all the time?
kittyman (Joined: 9 Jul 00, Posts: 51468, Credit: 1,018,363,574, RAC: 1,004)
"I run mainly SETI AP tasks on my ATI card..." I dunno, but I see this all the time. Transfer starts, mucho transfer rate... less... less... less... stall. Not uncommon.
"Freedom is just Chaos, with better lighting." Alan Dean Foster
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
Anyone else got an idea??
Claggy (Joined: 5 Jul 99, Posts: 4654, Credit: 47,537,079, RAC: 4)
"Anyone have a clue why transfers stall all the time?" Because, as always, the number of downloads in progress is well above what is needed to saturate the 100 Mbit/s Hurricane link (maybe by 5 or 6 times): Graphs for gigabitethernet2_3
Claggy
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
"Anyone have a clue why transfers stall all the time?" Yes, but that doesn't explain why there is BW as soon as the Retry button is pushed. A couple of years ago, when a stalled transfer was retried it stayed stalled. That made sense, but that's not how it behaves now.
Claggy (Joined: 5 Jul 99, Posts: 4654, Credit: 47,537,079, RAC: 4)
"Yes, but that doesn't explain why there is BW as soon as the Retry button is pushed." BW? What does that mean?
Claggy
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)
"Yes, but that doesn't explain why there is BW as soon as the Retry button is pushed." Bandwidth.
ExchangeMan (Joined: 9 Jan 00, Posts: 115, Credit: 157,719,104, RAC: 0)
I've noticed this repeatedly. You get a little bandwidth after toggling network activity, then it goes back to the usual slow or stalled state again. I would really like to know the reason for this anomaly.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)
"You get a little bandwidth after toggling network activity, then it goes back to slow or stalled again." My guess is that the server software (Apache or nginx) isn't very efficient when negotiating with libcurl for the resend of dropped packets over a busy link with lots of collisions. I suspect even the NAK packets fail to get through, so the two of them end up in deadlock and time out. When you retry, they at least start in sync, and stay in sync until they next need to resend a dropped packet. Note that when uploading files, the very fast transfer of the first 16 KB of the file just represents an internal transfer of 16 KB from BOINC into a local transmission buffer. I think.
Wedge009 (Joined: 3 Apr 99, Posts: 451, Credit: 431,396,357, RAC: 553)
The impression I get is that the server tends to drop connections after a while, which leads to time-outs at the client's end. So a fresh connection might get a burst of download, but then it is lost after a while, or even after a few seconds, because of the huge demand on the server. Looking at your location and RAC, though, I don't think you have as much of a bandwidth problem as some others. ;)
Edit: Gah, Richard beat me to the post.
Soli Deo Gloria
Josef W. Segur (Joined: 30 Oct 99, Posts: 4504, Credit: 1,414,761, RAC: 0)
IMO whatever TCP congestion avoidance algorithm is in use is likely producing that "instant BW" effect for each new connection. The cause was already stated: there's more work assigned to be downloaded than can fit through the pipe.
Joe
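Joe's point can be illustrated with a toy model of TCP slow start: each new connection's congestion window ramps up exponentially per round trip, then collapses once it overshoots its share of an oversubscribed link, which is exactly the "instant BW, then stall" pattern described in this thread. This is a deliberately simplified sketch (the function name and all numbers are invented for illustration), not BOINC or kernel networking code:

```python
# Toy model of TCP slow start (illustrative only; function name and
# numbers are invented, this is not BOINC or kernel networking code).
# Each new connection's congestion window doubles every round trip
# until it overshoots its fair share of the link, then a loss/timeout
# drops it back to one segment and the cycle repeats.

def slow_start_trace(link_capacity, competing_flows, rounds):
    """Return the congestion window (in segments) for each round trip."""
    cwnd = 1
    fair_share = max(1, link_capacity // competing_flows)
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > fair_share:
            cwnd = 1      # overshoot: packets lost, back to slow start
        else:
            cwnd *= 2     # slow start: exponential ramp-up
    return trace

# A link with room for 64 segments per round trip, shared by 8 flows:
print(slow_start_trace(link_capacity=64, competing_flows=8, rounds=8))
# → [1, 2, 4, 8, 16, 1, 2, 4]: a fast initial burst, then collapse
```

The more flows compete for the link (and the thread suggests demand was 5 or 6 times the link capacity), the smaller the fair share and the sooner each connection falls back to a crawl.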
John McLeod VII (Joined: 15 Jul 99, Posts: 24806, Credit: 790,712, RAC: 0)
The clue on my Windows 7 machine is that during the first part of the transfer, it goes much faster than my link will support. I believe that what might be happening is that BOINC is seeing the transfer into some internal buffer initially, and then looking at the end-to-end transfer when it times out. Try changing one of the options in cc_config.xml to see if the problem goes away: set <http_transfer_timeout>seconds</http_transfer_timeout> to maybe 3500 rather than 300 and see what happens.
BOINC WIKI
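For reference, a minimal cc_config.xml sketch applying this suggestion (the 3500-second value is just the figure from this post, not an endorsed default; the stock timeout is 300 seconds):

```xml
<cc_config>
  <options>
    <!-- Raise the HTTP transfer timeout from the default 300 s,
         per the suggestion above -->
    <http_transfer_timeout>3500</http_transfer_timeout>
  </options>
</cc_config>
```

The file goes in the BOINC data directory and is read at client startup (or via "Read config files" in the Manager).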
Horacio (Joined: 14 Jan 00, Posts: 536, Credit: 75,967,266, RAC: 0)
"The cause was already stated: there's more work assigned to be downloaded than can fit through the pipe." Wouldn't it be possible to throttle the splitters (or the feeder, or the scheduler, or whatever is needed) in some way so they don't produce/assign more files to be transferred than the pipes can handle? I know that means less work available to be assigned, but it doesn't help to have it assigned if you can't download it... especially if the transfers fail and need to be retried over and over again, wasting a lot of bandwidth...
Mr. Kevvy (Joined: 15 May 99, Posts: 3776, Credit: 1,114,826,392, RAC: 3,319)
Seems that what's ruining it for me is the Astropulse work units. They are so large that they inevitably time out 1-2% into the download. Their time estimate is also ridiculous: for me they show 159 hours, but they take about 1-2% of that to complete! So, since the BOINC client thinks there's enough GPU work, it won't request any from other projects, and the GPUs sit idle. So much for my wanting to get GPU Astropulse running. :^p I've just babysat two GPU AP downloads, turning network connectivity off and on about twenty times in twenty minutes to restart them, and have yet to complete one...
Terror Australis (Joined: 14 Feb 04, Posts: 1817, Credit: 262,693,308, RAC: 44)
What I'm finding is that uploads kill the downloads. I will have downloads creeping along at around 2.5 to 3.5 kB/s, slow but still happening. Then an upload happens, and as soon as the upload is finished the downloads stall until I use the "suspend network activity/restart network activity" trick. After this the downloads creep along OK until the next upload. Most annoying when two-thirds of the units on board are shorties.
T.A.
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
"There's more work assigned to be downloaded than can fit through the pipe." Isn't there a way to limit the connections to, say, 97% of capacity, so those transfers don't stall? It would lead to some "server does not respond" errors, but the flow of completed transfers would clear the stalled data "in the pipe" (router).
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
"IMO whatever TCP congestion avoidance algorithm is in use is likely producing that 'instant BW' effect for each new connection. The cause was already stated: there's more work assigned to be downloaded than can fit through the pipe." +1
TRuEQ & TuVaLu (Joined: 4 Oct 99, Posts: 505, Credit: 69,523,653, RAC: 10)
"What I'm finding is that uploads kill the downloads." Is that local at your computer or at the server? It sounds to me like a local problem with a half-duplex/full-duplex mismatch on the network interface (NIC). If it were a server-side problem with uploads/downloads, more people would experience it.
Cherokee150 (Joined: 11 Nov 99, Posts: 192, Credit: 58,513,758, RAC: 74)
Here's what I see happening now that many AP units are being sent:
1. In each batch of new tasks sent, I get an AP or two.
2. Fairly quickly I have two APs downloading at once.
3. These two soon stall.
4. Once stalled, they eventually time out with a ridiculous backoff of between four and five hours.
5. While these APs are stalled and/or backed off, BOINC and/or SETI refuse to send my results and also refuse to request any new tasks, AP or MB, for CPU or GPU.
6. This timeout/backoff cycle continues for so long (days) that my faster rigs run completely out of GPU MB and sometimes even out of CPU MB tasks!
All this because of two lonely AP units that are stuck for days if left untouched. So, until someone fixes this flaw in the transfer system, here is my question: is there an override option that will allow more than two downloads at a time, so that the two stuck AP downloads (now a constant problem) will not prevent me from getting any more MB CPU/GPU units? I think more than two transfers at a time (say, four) may be at least a partial workaround that could keep my rigs running quite a bit longer before running out of work thanks to just two stuck APs. The only other option I see at this time is to turn off AP processing, which I really do not want to do. Thanks for any help you can give me with this!
William (Joined: 14 Feb 13, Posts: 2037, Credit: 17,689,662, RAC: 0)
"Here's what I see happening now that many AP units are being sent..." I hate to break the news, but it's not a bug, it's by design. Those ridiculous backoffs are the reason quite a few of the hardcore crunchers are still on 6.10.x. Maybe it's time to try to convince David again that those long backoffs weren't a good idea after all (at least not for this project).
"Is there an override option that will allow more than two downloads at a time?" Yes. You need a cc_config.xml file in your BOINC data dir. Details here:
<max_file_xfers>N</max_file_xfers> Maximum number of simultaneous file transfers (default 8).
<max_file_xfers_per_project>N</max_file_xfers_per_project> Maximum number of simultaneous file transfers per project (default 2).
You want to raise the 'per project' setting. Sorry, it's too early in the morning to come up with detailed instructions; if you get stuck, ask again.
To alleviate the backoff problem, you need something to periodically reset the backoffs. That can be achieved with SIV (see here) or by using Windows' own Task Scheduler and something like this.
HTH, William the Silent
A person who won't read has no advantage over one who can't read. (Mark Twain)
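A concrete sketch of the cc_config.xml William describes, using the per-project option he quotes (the value 4 follows Cherokee150's own suggestion and is not an official recommendation):

```xml
<cc_config>
  <options>
    <!-- Default is 2; raising it lets other downloads proceed
         even while two AP transfers are stuck -->
    <max_file_xfers_per_project>4</max_file_xfers_per_project>
  </options>
</cc_config>
```

As with any cc_config.xml change, the file lives in the BOINC data directory and takes effect at client startup or after "Read config files" in the Manager.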
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.